2026-01-13 00:00:07.939575 | Job console starting
2026-01-13 00:00:07.971918 | Updating git repos
2026-01-13 00:00:08.339342 | Cloning repos into workspace
2026-01-13 00:00:08.612592 | Restoring repo states
2026-01-13 00:00:08.628980 | Merging changes
2026-01-13 00:00:08.629025 | Checking out repos
2026-01-13 00:00:09.143811 | Preparing playbooks
2026-01-13 00:00:10.250991 | Running Ansible setup
2026-01-13 00:00:18.051864 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-13 00:00:21.367234 |
2026-01-13 00:00:21.367434 | PLAY [Base pre]
2026-01-13 00:00:21.394744 |
2026-01-13 00:00:21.406163 | TASK [Setup log path fact]
2026-01-13 00:00:21.442010 | orchestrator | ok
2026-01-13 00:00:21.540842 |
2026-01-13 00:00:21.541106 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-13 00:00:21.604387 | orchestrator | ok
2026-01-13 00:00:21.631936 |
2026-01-13 00:00:21.632096 | TASK [emit-job-header : Print job information]
2026-01-13 00:00:21.717653 | # Job Information
2026-01-13 00:00:21.717852 | Ansible Version: 2.16.14
2026-01-13 00:00:21.717886 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-13 00:00:21.717919 | Pipeline: periodic-midnight
2026-01-13 00:00:21.717941 | Executor: 521e9411259a
2026-01-13 00:00:21.717962 | Triggered by: https://github.com/osism/testbed
2026-01-13 00:00:21.717983 | Event ID: 266de1de9ea347ba87510853df01b929
2026-01-13 00:00:21.725514 |
2026-01-13 00:00:21.725673 | LOOP [emit-job-header : Print node information]
2026-01-13 00:00:22.065258 | orchestrator | ok:
2026-01-13 00:00:22.065560 | orchestrator | # Node Information
2026-01-13 00:00:22.065606 | orchestrator | Inventory Hostname: orchestrator
2026-01-13 00:00:22.065631 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-13 00:00:22.065653 | orchestrator | Username: zuul-testbed05
2026-01-13 00:00:22.065674 | orchestrator | Distro: Debian 12.13
2026-01-13 00:00:22.065697 | orchestrator | Provider: static-testbed
2026-01-13 00:00:22.065718 | orchestrator | Region:
2026-01-13 00:00:22.065740 | orchestrator | Label: testbed-orchestrator
2026-01-13 00:00:22.065759 | orchestrator | Product Name: OpenStack Nova
2026-01-13 00:00:22.065778 | orchestrator | Interface IP: 81.163.193.140
2026-01-13 00:00:22.104633 |
2026-01-13 00:00:22.104809 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-13 00:00:23.930960 | orchestrator -> localhost | changed
2026-01-13 00:00:23.940189 |
2026-01-13 00:00:23.940335 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-13 00:00:28.080271 | orchestrator -> localhost | changed
2026-01-13 00:00:28.131761 |
2026-01-13 00:00:28.131872 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-13 00:00:29.245943 | orchestrator -> localhost | ok
2026-01-13 00:00:29.252129 |
2026-01-13 00:00:29.252226 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-13 00:00:29.299913 | orchestrator | ok
2026-01-13 00:00:29.337087 | orchestrator | included: /var/lib/zuul/builds/ca76e7f80cbb4cdda68907de4afef11c/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-13 00:00:29.385012 |
2026-01-13 00:00:29.385112 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-13 00:00:32.577814 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-13 00:00:32.577986 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/ca76e7f80cbb4cdda68907de4afef11c/work/ca76e7f80cbb4cdda68907de4afef11c_id_rsa
2026-01-13 00:00:32.578019 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/ca76e7f80cbb4cdda68907de4afef11c/work/ca76e7f80cbb4cdda68907de4afef11c_id_rsa.pub
2026-01-13 00:00:32.578040 | orchestrator -> localhost | The key fingerprint is:
2026-01-13 00:00:32.578062 | orchestrator -> localhost | SHA256:5YhT0SjeywBjjwG3Wb4O5DgTueUtZW0cZGAjkPMaVdQ zuul-build-sshkey
2026-01-13 00:00:32.578082 | orchestrator -> localhost | The key's randomart image is:
2026-01-13 00:00:32.578107 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-13 00:00:32.578126 | orchestrator -> localhost | | o+ooB+=o |
2026-01-13 00:00:32.578144 | orchestrator -> localhost | | oo*B.=Eo. |
2026-01-13 00:00:32.578200 | orchestrator -> localhost | | o==B+o= . |
2026-01-13 00:00:32.578218 | orchestrator -> localhost | | .Oo++=.+ |
2026-01-13 00:00:32.578235 | orchestrator -> localhost | | =o= =oS.. |
2026-01-13 00:00:32.578257 | orchestrator -> localhost | | .o + .o |
2026-01-13 00:00:32.578274 | orchestrator -> localhost | | . |
2026-01-13 00:00:32.578291 | orchestrator -> localhost | | |
2026-01-13 00:00:32.578309 | orchestrator -> localhost | | |
2026-01-13 00:00:32.578326 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-13 00:00:32.578370 | orchestrator -> localhost | ok: Runtime: 0:00:01.675632
2026-01-13 00:00:32.585329 |
2026-01-13 00:00:32.585420 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-13 00:00:32.670384 | orchestrator | ok
2026-01-13 00:00:32.698425 | orchestrator | included: /var/lib/zuul/builds/ca76e7f80cbb4cdda68907de4afef11c/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-13 00:00:32.730768 |
2026-01-13 00:00:32.731991 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-13 00:00:32.805515 | orchestrator | skipping: Conditional result was False
2026-01-13 00:00:32.819536 |
2026-01-13 00:00:32.819661 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-13 00:00:34.106266 | orchestrator | changed
2026-01-13 00:00:34.133621 |
2026-01-13 00:00:34.133745 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-13 00:00:34.440459 | orchestrator | ok
2026-01-13 00:00:34.449136 |
2026-01-13 00:00:34.449313 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-13 00:00:34.970570 | orchestrator | ok
2026-01-13 00:00:34.997378 |
2026-01-13 00:00:34.998060 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-13 00:00:35.547306 | orchestrator | ok
2026-01-13 00:00:35.553587 |
2026-01-13 00:00:35.553683 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-13 00:00:35.587774 | orchestrator | skipping: Conditional result was False
2026-01-13 00:00:35.595274 |
2026-01-13 00:00:35.595378 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-13 00:00:37.457987 | orchestrator -> localhost | changed
2026-01-13 00:00:37.483026 |
2026-01-13 00:00:37.483438 | TASK [add-build-sshkey : Add back temp key]
2026-01-13 00:00:38.796205 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/ca76e7f80cbb4cdda68907de4afef11c/work/ca76e7f80cbb4cdda68907de4afef11c_id_rsa (zuul-build-sshkey)
2026-01-13 00:00:38.797076 | orchestrator -> localhost | ok: Runtime: 0:00:00.034179
2026-01-13 00:00:38.804157 |
2026-01-13 00:00:38.804254 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-13 00:00:39.714920 | orchestrator | ok
2026-01-13 00:00:39.727917 |
2026-01-13 00:00:39.728024 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-13 00:00:39.797671 | orchestrator | skipping: Conditional result was False
2026-01-13 00:00:40.015886 |
2026-01-13 00:00:40.016007 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-13 00:00:40.667904 | orchestrator | ok
2026-01-13 00:00:40.678713 |
2026-01-13 00:00:40.678818 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-13 00:00:40.723631 | orchestrator | ok
2026-01-13 00:00:40.734789 |
2026-01-13 00:00:40.734929 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-13 00:00:41.518123 | orchestrator -> localhost | ok
2026-01-13 00:00:41.524340 |
2026-01-13 00:00:41.524430 | TASK [validate-host : Collect information about the host]
2026-01-13 00:00:43.310034 | orchestrator | ok
2026-01-13 00:00:43.328850 |
2026-01-13 00:00:43.328961 | TASK [validate-host : Sanitize hostname]
2026-01-13 00:00:43.487845 | orchestrator | ok
2026-01-13 00:00:43.492427 |
2026-01-13 00:00:43.492511 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-13 00:00:45.610166 | orchestrator -> localhost | changed
2026-01-13 00:00:45.616032 |
2026-01-13 00:00:45.616122 | TASK [validate-host : Collect information about zuul worker]
2026-01-13 00:00:46.305966 | orchestrator | ok
2026-01-13 00:00:46.327052 |
2026-01-13 00:00:46.327160 | TASK [validate-host : Write out all zuul information for each host]
2026-01-13 00:00:48.062576 | orchestrator -> localhost | changed
2026-01-13 00:00:48.072160 |
2026-01-13 00:00:48.072253 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-13 00:00:48.415781 | orchestrator | ok
2026-01-13 00:00:48.422454 |
2026-01-13 00:00:48.422582 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-13 00:02:09.016458 | orchestrator | changed:
2026-01-13 00:02:09.016724 | orchestrator | .d..t...... src/
2026-01-13 00:02:09.016761 | orchestrator | .d..t...... src/github.com/
2026-01-13 00:02:09.016787 | orchestrator | .d..t...... src/github.com/osism/
2026-01-13 00:02:09.016810 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-13 00:02:09.016831 | orchestrator | RedHat.yml
2026-01-13 00:02:09.055178 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-13 00:02:09.055200 | orchestrator | RedHat.yml
2026-01-13 00:02:09.055264 | orchestrator | = 1.53.0"...
2026-01-13 00:02:20.657393 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-13 00:02:20.825136 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-13 00:02:21.249435 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-13 00:02:21.505229 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-13 00:02:22.447881 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-13 00:02:22.522832 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-13 00:02:23.072517 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-13 00:02:23.072573 | orchestrator |
2026-01-13 00:02:23.072578 | orchestrator | Providers are signed by their developers.
2026-01-13 00:02:23.072583 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-13 00:02:23.072588 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-13 00:02:23.072599 | orchestrator |
2026-01-13 00:02:23.072604 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-13 00:02:23.072608 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-13 00:02:23.072618 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-13 00:02:23.072623 | orchestrator | you run "tofu init" in the future.
2026-01-13 00:02:23.073146 | orchestrator |
2026-01-13 00:02:23.073180 | orchestrator | OpenTofu has been successfully initialized!
2026-01-13 00:02:23.073209 | orchestrator |
2026-01-13 00:02:23.073214 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-13 00:02:23.073218 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-13 00:02:23.073222 | orchestrator | should now work.
2026-01-13 00:02:23.073226 | orchestrator |
2026-01-13 00:02:23.073229 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-13 00:02:23.073233 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-13 00:02:23.073241 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-13 00:02:23.235234 | orchestrator | Created and switched to workspace "ci"!
2026-01-13 00:02:23.235315 | orchestrator |
2026-01-13 00:02:23.235322 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-13 00:02:23.235327 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-13 00:02:23.235351 | orchestrator | for this configuration.
2026-01-13 00:02:23.402488 | orchestrator | ci.auto.tfvars
2026-01-13 00:02:23.405779 | orchestrator | default_custom.tf
2026-01-13 00:02:24.466096 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-13 00:02:25.003090 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-13 00:02:25.274321 | orchestrator |
2026-01-13 00:02:25.274385 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-13 00:02:25.274393 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-13 00:02:25.274409 | orchestrator | + create
2026-01-13 00:02:25.274415 | orchestrator | <= read (data resources)
2026-01-13 00:02:25.274420 | orchestrator |
2026-01-13 00:02:25.274424 | orchestrator | OpenTofu will perform the following actions:
2026-01-13 00:02:25.274429 | orchestrator |
2026-01-13 00:02:25.274434 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-13 00:02:25.274439 | orchestrator | # (config refers to values not yet known)
2026-01-13 00:02:25.274443 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-13 00:02:25.274448 | orchestrator | + checksum = (known after apply)
2026-01-13 00:02:25.274452 | orchestrator | + created_at = (known after apply)
2026-01-13 00:02:25.274457 | orchestrator | + file = (known after apply)
2026-01-13 00:02:25.274461 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.274479 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.274483 | orchestrator | + min_disk_gb = (known after apply)
2026-01-13 00:02:25.274487 | orchestrator | + min_ram_mb = (known after apply)
2026-01-13 00:02:25.274491 | orchestrator | + most_recent = true
2026-01-13 00:02:25.274495 | orchestrator | + name = (known after apply)
2026-01-13 00:02:25.274499 | orchestrator | + protected = (known after apply)
2026-01-13 00:02:25.274504 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.274510 | orchestrator | + schema = (known after apply)
2026-01-13 00:02:25.274514 | orchestrator | + size_bytes = (known after apply)
2026-01-13 00:02:25.274518 | orchestrator | + tags = (known after apply)
2026-01-13 00:02:25.274522 | orchestrator | + updated_at = (known after apply)
2026-01-13 00:02:25.274526 | orchestrator | }
2026-01-13 00:02:25.274533 | orchestrator |
2026-01-13 00:02:25.274537 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-13 00:02:25.274541 | orchestrator | # (config refers to values not yet known)
2026-01-13 00:02:25.274546 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-13 00:02:25.274550 | orchestrator | + checksum = (known after apply)
2026-01-13 00:02:25.274554 | orchestrator | + created_at = (known after apply)
2026-01-13 00:02:25.274559 | orchestrator | + file = (known after apply)
2026-01-13 00:02:25.274563 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.274567 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.274571 | orchestrator | + min_disk_gb = (known after apply)
2026-01-13 00:02:25.274576 | orchestrator | + min_ram_mb = (known after apply)
2026-01-13 00:02:25.274580 | orchestrator | + most_recent = true
2026-01-13 00:02:25.274584 | orchestrator | + name = (known after apply)
2026-01-13 00:02:25.274588 | orchestrator | + protected = (known after apply)
2026-01-13 00:02:25.274592 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.274595 | orchestrator | + schema = (known after apply)
2026-01-13 00:02:25.274599 | orchestrator | + size_bytes = (known after apply)
2026-01-13 00:02:25.274603 | orchestrator | + tags = (known after apply)
2026-01-13 00:02:25.274607 | orchestrator | + updated_at = (known after apply)
2026-01-13 00:02:25.274610 | orchestrator | }
2026-01-13 00:02:25.274614 | orchestrator |
2026-01-13 00:02:25.274618 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-13 00:02:25.274622 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-13 00:02:25.274626 | orchestrator | + content = (known after apply)
2026-01-13 00:02:25.274630 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-13 00:02:25.274634 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-13 00:02:25.274638 | orchestrator | + content_md5 = (known after apply)
2026-01-13 00:02:25.274642 | orchestrator | + content_sha1 = (known after apply)
2026-01-13 00:02:25.274646 | orchestrator | + content_sha256 = (known after apply)
2026-01-13 00:02:25.274650 | orchestrator | + content_sha512 = (known after apply)
2026-01-13 00:02:25.274654 | orchestrator | + directory_permission = "0777"
2026-01-13 00:02:25.274658 | orchestrator | + file_permission = "0644"
2026-01-13 00:02:25.274663 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-13 00:02:25.274667 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.274671 | orchestrator | }
2026-01-13 00:02:25.274677 | orchestrator |
2026-01-13 00:02:25.274681 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-13 00:02:25.274685 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-13 00:02:25.274690 | orchestrator | + content = (known after apply)
2026-01-13 00:02:25.274694 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-13 00:02:25.274698 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-13 00:02:25.274702 | orchestrator | + content_md5 = (known after apply)
2026-01-13 00:02:25.274706 | orchestrator | + content_sha1 = (known after apply)
2026-01-13 00:02:25.274710 | orchestrator | + content_sha256 = (known after apply)
2026-01-13 00:02:25.274714 | orchestrator | + content_sha512 = (known after apply)
2026-01-13 00:02:25.274718 | orchestrator | + directory_permission = "0777"
2026-01-13 00:02:25.274721 | orchestrator | + file_permission = "0644"
2026-01-13 00:02:25.274729 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-13 00:02:25.274732 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.274736 | orchestrator | }
2026-01-13 00:02:25.274740 | orchestrator |
2026-01-13 00:02:25.274747 | orchestrator | # local_file.inventory will be created
2026-01-13 00:02:25.274751 | orchestrator | + resource "local_file" "inventory" {
2026-01-13 00:02:25.274755 | orchestrator | + content = (known after apply)
2026-01-13 00:02:25.274759 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-13 00:02:25.274762 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-13 00:02:25.274766 | orchestrator | + content_md5 = (known after apply)
2026-01-13 00:02:25.274770 | orchestrator | + content_sha1 = (known after apply)
2026-01-13 00:02:25.274774 | orchestrator | + content_sha256 = (known after apply)
2026-01-13 00:02:25.274778 | orchestrator | + content_sha512 = (known after apply)
2026-01-13 00:02:25.274782 | orchestrator | + directory_permission = "0777"
2026-01-13 00:02:25.274785 | orchestrator | + file_permission = "0644"
2026-01-13 00:02:25.274789 | orchestrator | + filename = "inventory.ci"
2026-01-13 00:02:25.274793 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.274797 | orchestrator | }
2026-01-13 00:02:25.274802 | orchestrator |
2026-01-13 00:02:25.274806 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-13 00:02:25.274810 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-13 00:02:25.274814 | orchestrator | + content = (sensitive value)
2026-01-13 00:02:25.274818 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-13 00:02:25.274822 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-13 00:02:25.274826 | orchestrator | + content_md5 = (known after apply)
2026-01-13 00:02:25.274829 | orchestrator | + content_sha1 = (known after apply)
2026-01-13 00:02:25.274833 | orchestrator | + content_sha256 = (known after apply)
2026-01-13 00:02:25.274837 | orchestrator | + content_sha512 = (known after apply)
2026-01-13 00:02:25.274841 | orchestrator | + directory_permission = "0700"
2026-01-13 00:02:25.274845 | orchestrator | + file_permission = "0600"
2026-01-13 00:02:25.274848 | orchestrator | + filename = ".id_rsa.ci"
2026-01-13 00:02:25.274852 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.274856 | orchestrator | }
2026-01-13 00:02:25.274861 | orchestrator |
2026-01-13 00:02:25.274866 | orchestrator | # null_resource.node_semaphore will be created
2026-01-13 00:02:25.274870 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-13 00:02:25.274874 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.274878 | orchestrator | }
2026-01-13 00:02:25.274882 | orchestrator |
2026-01-13 00:02:25.274887 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-13 00:02:25.274891 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-13 00:02:25.274896 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.274900 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.274904 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.274908 | orchestrator | + image_id = (known after apply)
2026-01-13 00:02:25.274912 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.274916 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-13 00:02:25.274920 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.274924 | orchestrator | + size = 80
2026-01-13 00:02:25.274928 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.274932 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.274935 | orchestrator | }
2026-01-13 00:02:25.274941 | orchestrator |
2026-01-13 00:02:25.274945 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-13 00:02:25.274949 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-13 00:02:25.274952 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.274956 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.274960 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.274967 | orchestrator | + image_id = (known after apply)
2026-01-13 00:02:25.274971 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.274975 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-13 00:02:25.274979 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.274983 | orchestrator | + size = 80
2026-01-13 00:02:25.274988 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.274992 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.274996 | orchestrator | }
2026-01-13 00:02:25.275000 | orchestrator |
2026-01-13 00:02:25.275004 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-13 00:02:25.275009 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-13 00:02:25.275016 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.275021 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.275025 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.275029 | orchestrator | + image_id = (known after apply)
2026-01-13 00:02:25.275033 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.275037 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-13 00:02:25.275041 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.275045 | orchestrator | + size = 80
2026-01-13 00:02:25.275048 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.275052 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.275056 | orchestrator | }
2026-01-13 00:02:25.275060 | orchestrator |
2026-01-13 00:02:25.275064 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-13 00:02:25.275068 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-13 00:02:25.275071 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.275075 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.275079 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.275083 | orchestrator | + image_id = (known after apply)
2026-01-13 00:02:25.275087 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.275091 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-13 00:02:25.275095 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.275099 | orchestrator | + size = 80
2026-01-13 00:02:25.275103 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.275108 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.275112 | orchestrator | }
2026-01-13 00:02:25.275118 | orchestrator |
2026-01-13 00:02:25.275122 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-13 00:02:25.275126 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-13 00:02:25.275130 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.275134 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.275139 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.275143 | orchestrator | + image_id = (known after apply)
2026-01-13 00:02:25.275147 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.275153 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-13 00:02:25.275157 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.275161 | orchestrator | + size = 80
2026-01-13 00:02:25.275165 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.275169 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.275173 | orchestrator | }
2026-01-13 00:02:25.275177 | orchestrator |
2026-01-13 00:02:25.275180 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-13 00:02:25.275184 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-13 00:02:25.275188 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.275192 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.275196 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.275203 | orchestrator | + image_id = (known after apply)
2026-01-13 00:02:25.275207 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.275211 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-13 00:02:25.275215 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.275219 | orchestrator | + size = 80
2026-01-13 00:02:25.275223 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.275227 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.275231 | orchestrator | }
2026-01-13 00:02:25.275235 | orchestrator |
2026-01-13 00:02:25.275240 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-13 00:02:25.275256 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-13 00:02:25.275260 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.275265 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.275269 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.275273 | orchestrator | + image_id = (known after apply)
2026-01-13 00:02:25.275277 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.275282 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-13 00:02:25.275285 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.275289 | orchestrator | + size = 80
2026-01-13 00:02:25.275293 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.275297 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.275301 | orchestrator | }
2026-01-13 00:02:25.275306 | orchestrator |
2026-01-13 00:02:25.275310 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-13 00:02:25.275314 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-13 00:02:25.275318 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.275322 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.275326 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.275330 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.275334 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-13 00:02:25.275338 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.275342 | orchestrator | + size = 20
2026-01-13 00:02:25.275346 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.275350 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.275354 | orchestrator | }
2026-01-13 00:02:25.275359 | orchestrator |
2026-01-13 00:02:25.275363 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-13 00:02:25.275367 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-13 00:02:25.275371 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.275376 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.275380 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.275384 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.275388 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-13 00:02:25.275393 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.275397 | orchestrator | + size = 20
2026-01-13 00:02:25.275401 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.275404 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.275408 | orchestrator | }
2026-01-13 00:02:25.275412 | orchestrator |
2026-01-13 00:02:25.275416 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-13 00:02:25.275420 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-13 00:02:25.275423 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.275427 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.275431 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.275435 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.275439 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-13 00:02:25.275443 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.275450 | orchestrator | + size = 20
2026-01-13 00:02:25.275454 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.275459 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.275463 | orchestrator | }
2026-01-13 00:02:25.275467 | orchestrator |
2026-01-13 00:02:25.275471 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-13 00:02:25.275475 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-13 00:02:25.275479 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.275484 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.275488 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.275492 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.275496 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-13 00:02:25.275501 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.275505 | orchestrator | + size = 20
2026-01-13 00:02:25.275509 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.275513 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.275518 | orchestrator | }
2026-01-13 00:02:25.275523 | orchestrator |
2026-01-13 00:02:25.275527 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-13 00:02:25.275531 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-13 00:02:25.275535 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.275539 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.275543 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.275547 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.275550 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-13 00:02:25.275554 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.275560 | orchestrator | + size = 20
2026-01-13 00:02:25.275564 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.275568 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.275572 | orchestrator | }
2026-01-13 00:02:25.275576 | orchestrator |
2026-01-13 00:02:25.275580 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-13 00:02:25.275584 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-13 00:02:25.275588 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.275592 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.275596 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.275601 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.275605 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-13 00:02:25.275609 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.275613 | orchestrator | + size = 20
2026-01-13 00:02:25.275618 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.275622 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.275626 | orchestrator | }
2026-01-13 00:02:25.275630 | orchestrator |
2026-01-13 00:02:25.275635 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-13 00:02:25.275638 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-13 00:02:25.275642 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.275646 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.275650 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.275653 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.275657 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-13 00:02:25.275661 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.275665 | orchestrator | + size = 20
2026-01-13 00:02:25.275669 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.275672 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.275676 | orchestrator | }
2026-01-13 00:02:25.275680 | orchestrator |
2026-01-13 00:02:25.275684 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-13 00:02:25.275688 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-13 00:02:25.275695 | orchestrator | + attachment = (known after apply)
2026-01-13 00:02:25.275699 | orchestrator | + availability_zone = "nova"
2026-01-13 00:02:25.275703 | orchestrator | + id = (known after apply)
2026-01-13 00:02:25.275707 | orchestrator | + metadata = (known after apply)
2026-01-13 00:02:25.275710 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-13 00:02:25.275714 | orchestrator | + region = (known after apply)
2026-01-13 00:02:25.275718 | orchestrator | + size = 20
2026-01-13 00:02:25.275722 | orchestrator | + volume_retype_policy = "never"
2026-01-13 00:02:25.275726 | orchestrator | + volume_type = "ssd"
2026-01-13 00:02:25.275729 | orchestrator | }
2026-01-13 00:02:25.275733 | orchestrator |
2026-01-13 00:02:25.275737 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-13 00:02:25.275741 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-13 00:02:25.275745 | orchestrator | + attachment = (known after apply) 2026-01-13 00:02:25.275749 | orchestrator | + availability_zone = "nova" 2026-01-13 00:02:25.275752 | orchestrator | + id = (known after apply) 2026-01-13 00:02:25.275756 | orchestrator | + metadata = (known after apply) 2026-01-13 00:02:25.275760 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-13 00:02:25.275764 | orchestrator | + region = (known after apply) 2026-01-13 00:02:25.275767 | orchestrator | + size = 20 2026-01-13 00:02:25.275771 | orchestrator | + volume_retype_policy = "never" 2026-01-13 00:02:25.275775 | orchestrator | + volume_type = "ssd" 2026-01-13 00:02:25.275779 | orchestrator | } 2026-01-13 00:02:25.275784 | orchestrator | 2026-01-13 00:02:25.275788 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-13 00:02:25.275792 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-13 00:02:25.275796 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-13 00:02:25.275800 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-13 00:02:25.275803 | orchestrator | + all_metadata = (known after apply) 2026-01-13 00:02:25.275807 | orchestrator | + all_tags = (known after apply) 2026-01-13 00:02:25.275811 | orchestrator | + availability_zone = "nova" 2026-01-13 00:02:25.275815 | orchestrator | + config_drive = true 2026-01-13 00:02:25.275819 | orchestrator | + created = (known after apply) 2026-01-13 00:02:25.275823 | orchestrator | + flavor_id = (known after apply) 2026-01-13 00:02:25.275827 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-13 00:02:25.275831 | orchestrator | + force_delete = false 2026-01-13 00:02:25.275835 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-13 00:02:25.275840 | 
orchestrator | + id = (known after apply) 2026-01-13 00:02:25.275844 | orchestrator | + image_id = (known after apply) 2026-01-13 00:02:25.275848 | orchestrator | + image_name = (known after apply) 2026-01-13 00:02:25.275853 | orchestrator | + key_pair = "testbed" 2026-01-13 00:02:25.275857 | orchestrator | + name = "testbed-manager" 2026-01-13 00:02:25.275861 | orchestrator | + power_state = "active" 2026-01-13 00:02:25.275865 | orchestrator | + region = (known after apply) 2026-01-13 00:02:25.275869 | orchestrator | + security_groups = (known after apply) 2026-01-13 00:02:25.275874 | orchestrator | + stop_before_destroy = false 2026-01-13 00:02:25.275878 | orchestrator | + updated = (known after apply) 2026-01-13 00:02:25.275882 | orchestrator | + user_data = (sensitive value) 2026-01-13 00:02:25.275886 | orchestrator | 2026-01-13 00:02:25.275891 | orchestrator | + block_device { 2026-01-13 00:02:25.275895 | orchestrator | + boot_index = 0 2026-01-13 00:02:25.275899 | orchestrator | + delete_on_termination = false 2026-01-13 00:02:25.275905 | orchestrator | + destination_type = "volume" 2026-01-13 00:02:25.275910 | orchestrator | + multiattach = false 2026-01-13 00:02:25.275914 | orchestrator | + source_type = "volume" 2026-01-13 00:02:25.275918 | orchestrator | + uuid = (known after apply) 2026-01-13 00:02:25.275926 | orchestrator | } 2026-01-13 00:02:25.275930 | orchestrator | 2026-01-13 00:02:25.275934 | orchestrator | + network { 2026-01-13 00:02:25.275939 | orchestrator | + access_network = false 2026-01-13 00:02:25.275943 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-13 00:02:25.275947 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-13 00:02:25.275951 | orchestrator | + mac = (known after apply) 2026-01-13 00:02:25.275955 | orchestrator | + name = (known after apply) 2026-01-13 00:02:25.275959 | orchestrator | + port = (known after apply) 2026-01-13 00:02:25.275964 | orchestrator | + uuid = (known after apply) 2026-01-13 
00:02:25.275968 | orchestrator | } 2026-01-13 00:02:25.275972 | orchestrator | } 2026-01-13 00:02:25.275978 | orchestrator | 2026-01-13 00:02:25.275983 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-13 00:02:25.275987 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-13 00:02:25.275991 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-13 00:02:25.275996 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-13 00:02:25.276000 | orchestrator | + all_metadata = (known after apply) 2026-01-13 00:02:25.276004 | orchestrator | + all_tags = (known after apply) 2026-01-13 00:02:25.276009 | orchestrator | + availability_zone = "nova" 2026-01-13 00:02:25.276013 | orchestrator | + config_drive = true 2026-01-13 00:02:25.276017 | orchestrator | + created = (known after apply) 2026-01-13 00:02:25.276021 | orchestrator | + flavor_id = (known after apply) 2026-01-13 00:02:25.276025 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-13 00:02:25.276029 | orchestrator | + force_delete = false 2026-01-13 00:02:25.276033 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-13 00:02:25.276037 | orchestrator | + id = (known after apply) 2026-01-13 00:02:25.276041 | orchestrator | + image_id = (known after apply) 2026-01-13 00:02:25.276044 | orchestrator | + image_name = (known after apply) 2026-01-13 00:02:25.276048 | orchestrator | + key_pair = "testbed" 2026-01-13 00:02:25.276052 | orchestrator | + name = "testbed-node-0" 2026-01-13 00:02:25.276056 | orchestrator | + power_state = "active" 2026-01-13 00:02:25.276060 | orchestrator | + region = (known after apply) 2026-01-13 00:02:25.276064 | orchestrator | + security_groups = (known after apply) 2026-01-13 00:02:25.276068 | orchestrator | + stop_before_destroy = false 2026-01-13 00:02:25.276072 | orchestrator | + updated = (known after apply) 2026-01-13 00:02:25.276075 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-13 00:02:25.276079 | orchestrator | 2026-01-13 00:02:25.276083 | orchestrator | + block_device { 2026-01-13 00:02:25.276087 | orchestrator | + boot_index = 0 2026-01-13 00:02:25.276091 | orchestrator | + delete_on_termination = false 2026-01-13 00:02:25.276094 | orchestrator | + destination_type = "volume" 2026-01-13 00:02:25.276098 | orchestrator | + multiattach = false 2026-01-13 00:02:25.276102 | orchestrator | + source_type = "volume" 2026-01-13 00:02:25.276106 | orchestrator | + uuid = (known after apply) 2026-01-13 00:02:25.276110 | orchestrator | } 2026-01-13 00:02:25.276113 | orchestrator | 2026-01-13 00:02:25.276117 | orchestrator | + network { 2026-01-13 00:02:25.276121 | orchestrator | + access_network = false 2026-01-13 00:02:25.276125 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-13 00:02:25.276129 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-13 00:02:25.276132 | orchestrator | + mac = (known after apply) 2026-01-13 00:02:25.276136 | orchestrator | + name = (known after apply) 2026-01-13 00:02:25.276140 | orchestrator | + port = (known after apply) 2026-01-13 00:02:25.276144 | orchestrator | + uuid = (known after apply) 2026-01-13 00:02:25.276148 | orchestrator | } 2026-01-13 00:02:25.276152 | orchestrator | } 2026-01-13 00:02:25.276157 | orchestrator | 2026-01-13 00:02:25.276161 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-13 00:02:25.276165 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-13 00:02:25.276169 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-13 00:02:25.276175 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-13 00:02:25.276179 | orchestrator | + all_metadata = (known after apply) 2026-01-13 00:02:25.276183 | orchestrator | + all_tags = (known after apply) 2026-01-13 00:02:25.276186 | orchestrator | + availability_zone = "nova" 2026-01-13 00:02:25.276190 
| orchestrator | + config_drive = true 2026-01-13 00:02:25.276194 | orchestrator | + created = (known after apply) 2026-01-13 00:02:25.276198 | orchestrator | + flavor_id = (known after apply) 2026-01-13 00:02:25.276202 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-13 00:02:25.276206 | orchestrator | + force_delete = false 2026-01-13 00:02:25.276210 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-13 00:02:25.276213 | orchestrator | + id = (known after apply) 2026-01-13 00:02:25.276218 | orchestrator | + image_id = (known after apply) 2026-01-13 00:02:25.276222 | orchestrator | + image_name = (known after apply) 2026-01-13 00:02:25.276226 | orchestrator | + key_pair = "testbed" 2026-01-13 00:02:25.276230 | orchestrator | + name = "testbed-node-1" 2026-01-13 00:02:25.276235 | orchestrator | + power_state = "active" 2026-01-13 00:02:25.276239 | orchestrator | + region = (known after apply) 2026-01-13 00:02:25.276243 | orchestrator | + security_groups = (known after apply) 2026-01-13 00:02:25.276257 | orchestrator | + stop_before_destroy = false 2026-01-13 00:02:25.276261 | orchestrator | + updated = (known after apply) 2026-01-13 00:02:25.276265 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-13 00:02:25.276269 | orchestrator | 2026-01-13 00:02:25.276273 | orchestrator | + block_device { 2026-01-13 00:02:25.276277 | orchestrator | + boot_index = 0 2026-01-13 00:02:25.276281 | orchestrator | + delete_on_termination = false 2026-01-13 00:02:25.276286 | orchestrator | + destination_type = "volume" 2026-01-13 00:02:25.276290 | orchestrator | + multiattach = false 2026-01-13 00:02:25.276294 | orchestrator | + source_type = "volume" 2026-01-13 00:02:25.276298 | orchestrator | + uuid = (known after apply) 2026-01-13 00:02:25.276303 | orchestrator | } 2026-01-13 00:02:25.276307 | orchestrator | 2026-01-13 00:02:25.276311 | orchestrator | + network { 2026-01-13 00:02:25.276315 | orchestrator | + access_network = 
false 2026-01-13 00:02:25.276320 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-13 00:02:25.276324 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-13 00:02:25.276328 | orchestrator | + mac = (known after apply) 2026-01-13 00:02:25.276332 | orchestrator | + name = (known after apply) 2026-01-13 00:02:25.276336 | orchestrator | + port = (known after apply) 2026-01-13 00:02:25.276341 | orchestrator | + uuid = (known after apply) 2026-01-13 00:02:25.276345 | orchestrator | } 2026-01-13 00:02:25.276349 | orchestrator | } 2026-01-13 00:02:25.276355 | orchestrator | 2026-01-13 00:02:25.276360 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-13 00:02:25.276364 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-13 00:02:25.276368 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-13 00:02:25.276373 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-13 00:02:25.276377 | orchestrator | + all_metadata = (known after apply) 2026-01-13 00:02:25.276382 | orchestrator | + all_tags = (known after apply) 2026-01-13 00:02:25.276391 | orchestrator | + availability_zone = "nova" 2026-01-13 00:02:25.276395 | orchestrator | + config_drive = true 2026-01-13 00:02:25.276400 | orchestrator | + created = (known after apply) 2026-01-13 00:02:25.276404 | orchestrator | + flavor_id = (known after apply) 2026-01-13 00:02:25.276408 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-13 00:02:25.276412 | orchestrator | + force_delete = false 2026-01-13 00:02:25.276416 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-13 00:02:25.276421 | orchestrator | + id = (known after apply) 2026-01-13 00:02:25.276425 | orchestrator | + image_id = (known after apply) 2026-01-13 00:02:25.276432 | orchestrator | + image_name = (known after apply) 2026-01-13 00:02:25.276436 | orchestrator | + key_pair = "testbed" 2026-01-13 00:02:25.276439 | orchestrator | + name = 
"testbed-node-2" 2026-01-13 00:02:25.276443 | orchestrator | + power_state = "active" 2026-01-13 00:02:25.276447 | orchestrator | + region = (known after apply) 2026-01-13 00:02:25.276451 | orchestrator | + security_groups = (known after apply) 2026-01-13 00:02:25.276455 | orchestrator | + stop_before_destroy = false 2026-01-13 00:02:25.276458 | orchestrator | + updated = (known after apply) 2026-01-13 00:02:25.276462 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-13 00:02:25.276466 | orchestrator | 2026-01-13 00:02:25.276470 | orchestrator | + block_device { 2026-01-13 00:02:25.276473 | orchestrator | + boot_index = 0 2026-01-13 00:02:25.276477 | orchestrator | + delete_on_termination = false 2026-01-13 00:02:25.276481 | orchestrator | + destination_type = "volume" 2026-01-13 00:02:25.276485 | orchestrator | + multiattach = false 2026-01-13 00:02:25.276488 | orchestrator | + source_type = "volume" 2026-01-13 00:02:25.276492 | orchestrator | + uuid = (known after apply) 2026-01-13 00:02:25.276496 | orchestrator | } 2026-01-13 00:02:25.276500 | orchestrator | 2026-01-13 00:02:25.276504 | orchestrator | + network { 2026-01-13 00:02:25.276507 | orchestrator | + access_network = false 2026-01-13 00:02:25.276511 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-13 00:02:25.276515 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-13 00:02:25.276519 | orchestrator | + mac = (known after apply) 2026-01-13 00:02:25.276523 | orchestrator | + name = (known after apply) 2026-01-13 00:02:25.276526 | orchestrator | + port = (known after apply) 2026-01-13 00:02:25.276530 | orchestrator | + uuid = (known after apply) 2026-01-13 00:02:25.276534 | orchestrator | } 2026-01-13 00:02:25.276538 | orchestrator | } 2026-01-13 00:02:25.276543 | orchestrator | 2026-01-13 00:02:25.276547 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-13 00:02:25.276551 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-13 00:02:25.276555 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-13 00:02:25.276559 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-13 00:02:25.276562 | orchestrator | + all_metadata = (known after apply) 2026-01-13 00:02:25.276566 | orchestrator | + all_tags = (known after apply) 2026-01-13 00:02:25.276570 | orchestrator | + availability_zone = "nova" 2026-01-13 00:02:25.276574 | orchestrator | + config_drive = true 2026-01-13 00:02:25.276577 | orchestrator | + created = (known after apply) 2026-01-13 00:02:25.276581 | orchestrator | + flavor_id = (known after apply) 2026-01-13 00:02:25.276585 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-13 00:02:25.276589 | orchestrator | + force_delete = false 2026-01-13 00:02:25.276593 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-13 00:02:25.276596 | orchestrator | + id = (known after apply) 2026-01-13 00:02:25.276600 | orchestrator | + image_id = (known after apply) 2026-01-13 00:02:25.276604 | orchestrator | + image_name = (known after apply) 2026-01-13 00:02:25.276608 | orchestrator | + key_pair = "testbed" 2026-01-13 00:02:25.276612 | orchestrator | + name = "testbed-node-3" 2026-01-13 00:02:25.276616 | orchestrator | + power_state = "active" 2026-01-13 00:02:25.276620 | orchestrator | + region = (known after apply) 2026-01-13 00:02:25.276624 | orchestrator | + security_groups = (known after apply) 2026-01-13 00:02:25.276629 | orchestrator | + stop_before_destroy = false 2026-01-13 00:02:25.276633 | orchestrator | + updated = (known after apply) 2026-01-13 00:02:25.276637 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-13 00:02:25.276641 | orchestrator | 2026-01-13 00:02:25.276646 | orchestrator | + block_device { 2026-01-13 00:02:25.276652 | orchestrator | + boot_index = 0 2026-01-13 00:02:25.276657 | orchestrator | + delete_on_termination = false 2026-01-13 
00:02:25.276661 | orchestrator | + destination_type = "volume" 2026-01-13 00:02:25.276669 | orchestrator | + multiattach = false 2026-01-13 00:02:25.276673 | orchestrator | + source_type = "volume" 2026-01-13 00:02:25.276677 | orchestrator | + uuid = (known after apply) 2026-01-13 00:02:25.276682 | orchestrator | } 2026-01-13 00:02:25.276686 | orchestrator | 2026-01-13 00:02:25.276690 | orchestrator | + network { 2026-01-13 00:02:25.276694 | orchestrator | + access_network = false 2026-01-13 00:02:25.276698 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-13 00:02:25.276703 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-13 00:02:25.276707 | orchestrator | + mac = (known after apply) 2026-01-13 00:02:25.276711 | orchestrator | + name = (known after apply) 2026-01-13 00:02:25.276715 | orchestrator | + port = (known after apply) 2026-01-13 00:02:25.276719 | orchestrator | + uuid = (known after apply) 2026-01-13 00:02:25.276724 | orchestrator | } 2026-01-13 00:02:25.276728 | orchestrator | } 2026-01-13 00:02:25.276734 | orchestrator | 2026-01-13 00:02:25.276738 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-13 00:02:25.276742 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-13 00:02:25.276747 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-13 00:02:25.276751 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-13 00:02:25.276755 | orchestrator | + all_metadata = (known after apply) 2026-01-13 00:02:25.276759 | orchestrator | + all_tags = (known after apply) 2026-01-13 00:02:25.276764 | orchestrator | + availability_zone = "nova" 2026-01-13 00:02:25.276768 | orchestrator | + config_drive = true 2026-01-13 00:02:25.276772 | orchestrator | + created = (known after apply) 2026-01-13 00:02:25.276776 | orchestrator | + flavor_id = (known after apply) 2026-01-13 00:02:25.276781 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-13 00:02:25.276785 | 
orchestrator | + force_delete = false 2026-01-13 00:02:25.276789 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-13 00:02:25.276793 | orchestrator | + id = (known after apply) 2026-01-13 00:02:25.276797 | orchestrator | + image_id = (known after apply) 2026-01-13 00:02:25.276802 | orchestrator | + image_name = (known after apply) 2026-01-13 00:02:25.276806 | orchestrator | + key_pair = "testbed" 2026-01-13 00:02:25.276810 | orchestrator | + name = "testbed-node-4" 2026-01-13 00:02:25.276814 | orchestrator | + power_state = "active" 2026-01-13 00:02:25.276817 | orchestrator | + region = (known after apply) 2026-01-13 00:02:25.276821 | orchestrator | + security_groups = (known after apply) 2026-01-13 00:02:25.276825 | orchestrator | + stop_before_destroy = false 2026-01-13 00:02:25.276829 | orchestrator | + updated = (known after apply) 2026-01-13 00:02:25.276832 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-13 00:02:25.276836 | orchestrator | 2026-01-13 00:02:25.276840 | orchestrator | + block_device { 2026-01-13 00:02:25.276844 | orchestrator | + boot_index = 0 2026-01-13 00:02:25.276847 | orchestrator | + delete_on_termination = false 2026-01-13 00:02:25.276851 | orchestrator | + destination_type = "volume" 2026-01-13 00:02:25.276855 | orchestrator | + multiattach = false 2026-01-13 00:02:25.276859 | orchestrator | + source_type = "volume" 2026-01-13 00:02:25.276863 | orchestrator | + uuid = (known after apply) 2026-01-13 00:02:25.276866 | orchestrator | } 2026-01-13 00:02:25.276870 | orchestrator | 2026-01-13 00:02:25.276874 | orchestrator | + network { 2026-01-13 00:02:25.276878 | orchestrator | + access_network = false 2026-01-13 00:02:25.276881 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-13 00:02:25.276885 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-13 00:02:25.276889 | orchestrator | + mac = (known after apply) 2026-01-13 00:02:25.276893 | orchestrator | + name = (known 
after apply) 2026-01-13 00:02:25.276896 | orchestrator | + port = (known after apply) 2026-01-13 00:02:25.276900 | orchestrator | + uuid = (known after apply) 2026-01-13 00:02:25.276904 | orchestrator | } 2026-01-13 00:02:25.276908 | orchestrator | } 2026-01-13 00:02:25.276917 | orchestrator | 2026-01-13 00:02:25.276921 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-13 00:02:25.276925 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-13 00:02:25.276928 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-13 00:02:25.276932 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-13 00:02:25.276936 | orchestrator | + all_metadata = (known after apply) 2026-01-13 00:02:25.276940 | orchestrator | + all_tags = (known after apply) 2026-01-13 00:02:25.276944 | orchestrator | + availability_zone = "nova" 2026-01-13 00:02:25.276948 | orchestrator | + config_drive = true 2026-01-13 00:02:25.276952 | orchestrator | + created = (known after apply) 2026-01-13 00:02:25.276955 | orchestrator | + flavor_id = (known after apply) 2026-01-13 00:02:25.276959 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-13 00:02:25.276963 | orchestrator | + force_delete = false 2026-01-13 00:02:25.276970 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-13 00:02:25.276973 | orchestrator | + id = (known after apply) 2026-01-13 00:02:25.276978 | orchestrator | + image_id = (known after apply) 2026-01-13 00:02:25.276981 | orchestrator | + image_name = (known after apply) 2026-01-13 00:02:25.276985 | orchestrator | + key_pair = "testbed" 2026-01-13 00:02:25.276989 | orchestrator | + name = "testbed-node-5" 2026-01-13 00:02:25.276994 | orchestrator | + power_state = "active" 2026-01-13 00:02:25.276998 | orchestrator | + region = (known after apply) 2026-01-13 00:02:25.277002 | orchestrator | + security_groups = (known after apply) 2026-01-13 00:02:25.277006 | orchestrator | + 
stop_before_destroy = false 2026-01-13 00:02:25.277011 | orchestrator | + updated = (known after apply) 2026-01-13 00:02:25.277014 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-13 00:02:25.277019 | orchestrator | 2026-01-13 00:02:25.277023 | orchestrator | + block_device { 2026-01-13 00:02:25.277027 | orchestrator | + boot_index = 0 2026-01-13 00:02:25.277032 | orchestrator | + delete_on_termination = false 2026-01-13 00:02:25.277036 | orchestrator | + destination_type = "volume" 2026-01-13 00:02:25.277040 | orchestrator | + multiattach = false 2026-01-13 00:02:25.277044 | orchestrator | + source_type = "volume" 2026-01-13 00:02:25.277048 | orchestrator | + uuid = (known after apply) 2026-01-13 00:02:25.277053 | orchestrator | } 2026-01-13 00:02:25.277057 | orchestrator | 2026-01-13 00:02:25.277061 | orchestrator | + network { 2026-01-13 00:02:25.277066 | orchestrator | + access_network = false 2026-01-13 00:02:25.277070 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-13 00:02:25.277074 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-13 00:02:25.277078 | orchestrator | + mac = (known after apply) 2026-01-13 00:02:25.277082 | orchestrator | + name = (known after apply) 2026-01-13 00:02:25.277087 | orchestrator | + port = (known after apply) 2026-01-13 00:02:25.277091 | orchestrator | + uuid = (known after apply) 2026-01-13 00:02:25.277095 | orchestrator | } 2026-01-13 00:02:25.277099 | orchestrator | } 2026-01-13 00:02:25.277104 | orchestrator | 2026-01-13 00:02:25.277108 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-13 00:02:25.277112 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-13 00:02:25.277117 | orchestrator | + fingerprint = (known after apply) 2026-01-13 00:02:25.277121 | orchestrator | + id = (known after apply) 2026-01-13 00:02:25.277125 | orchestrator | + name = "testbed" 2026-01-13 00:02:25.277129 | orchestrator | + private_key = 
(sensitive value) 2026-01-13 00:02:25.277133 | orchestrator | + public_key = (known after apply) 2026-01-13 00:02:25.277138 | orchestrator | + region = (known after apply) 2026-01-13 00:02:25.277142 | orchestrator | + user_id = (known after apply) 2026-01-13 00:02:25.277146 | orchestrator | } 2026-01-13 00:02:25.277150 | orchestrator | 2026-01-13 00:02:25.277154 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-13 00:02:25.277158 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-13 00:02:25.277166 | orchestrator | + device = (known after apply) 2026-01-13 00:02:25.277171 | orchestrator | + id = (known after apply) 2026-01-13 00:02:25.277175 | orchestrator | + instance_id = (known after apply) 2026-01-13 00:02:25.277179 | orchestrator | + region = (known after apply) 2026-01-13 00:02:25.277183 | orchestrator | + volume_id = (known after apply) 2026-01-13 00:02:25.277187 | orchestrator | } 2026-01-13 00:02:25.277191 | orchestrator | 2026-01-13 00:02:25.277195 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-13 00:02:25.277199 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-13 00:02:25.277203 | orchestrator | + device = (known after apply) 2026-01-13 00:02:25.277207 | orchestrator | + id = (known after apply) 2026-01-13 00:02:25.277210 | orchestrator | + instance_id = (known after apply) 2026-01-13 00:02:25.277214 | orchestrator | + region = (known after apply) 2026-01-13 00:02:25.277218 | orchestrator | + volume_id = (known after apply) 2026-01-13 00:02:25.277221 | orchestrator | } 2026-01-13 00:02:25.277225 | orchestrator | 2026-01-13 00:02:25.277229 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-13 00:02:25.277233 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-01-13 00:02:25.277237 | orchestrator |       + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-01-13 00:02:25.280673 | orchestrator | + network_id = (known after apply) 2026-01-13 00:02:25.280677 | orchestrator | + no_gateway = false 2026-01-13 00:02:25.280682 | orchestrator | + region = (known after apply) 2026-01-13 00:02:25.280686 | orchestrator | + service_types = (known after apply) 2026-01-13 00:02:25.280693 | orchestrator | + tenant_id = (known after apply) 2026-01-13 00:02:25.280698 | orchestrator | 2026-01-13 00:02:25.280702 | orchestrator | + allocation_pool { 2026-01-13 00:02:25.280706 | orchestrator | + end = "192.168.31.250" 2026-01-13 00:02:25.280711 | orchestrator | + start = "192.168.31.200" 2026-01-13 00:02:25.280715 | orchestrator | } 2026-01-13 00:02:25.280719 | orchestrator | } 2026-01-13 00:02:25.280723 | orchestrator | 2026-01-13 00:02:25.280727 | orchestrator | # terraform_data.image will be created 2026-01-13 00:02:25.280732 | orchestrator | + resource "terraform_data" "image" { 2026-01-13 00:02:25.280736 | orchestrator | + id = (known after apply) 2026-01-13 00:02:25.280740 | orchestrator | + input = "Ubuntu 24.04" 2026-01-13 00:02:25.280744 | orchestrator | + output = (known after apply) 2026-01-13 00:02:25.280749 | orchestrator | } 2026-01-13 00:02:25.280753 | orchestrator | 2026-01-13 00:02:25.280757 | orchestrator | # terraform_data.image_node will be created 2026-01-13 00:02:25.280761 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-13 00:02:25.280765 | orchestrator | + id = (known after apply) 2026-01-13 00:02:25.280769 | orchestrator | + input = "Ubuntu 24.04" 2026-01-13 00:02:25.280772 | orchestrator | + output = (known after apply) 2026-01-13 00:02:25.280776 | orchestrator | } 2026-01-13 00:02:25.280780 | orchestrator | 2026-01-13 00:02:25.280784 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-01-13 00:02:25.280787 | orchestrator |
2026-01-13 00:02:25.280791 | orchestrator | Changes to Outputs:
2026-01-13 00:02:25.280795 | orchestrator | + manager_address = (sensitive value)
2026-01-13 00:02:25.280799 | orchestrator | + private_key = (sensitive value)
2026-01-13 00:02:25.967668 | orchestrator | terraform_data.image_node: Creating...
2026-01-13 00:02:25.967736 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=e1c2e4bb-a477-452d-9478-bcd164c47803]
2026-01-13 00:02:25.979449 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-01-13 00:02:25.990724 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-01-13 00:02:25.990969 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-01-13 00:02:25.991317 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-01-13 00:02:26.002360 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-01-13 00:02:26.002427 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-01-13 00:02:26.005648 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-01-13 00:02:26.008290 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-01-13 00:02:26.022172 | orchestrator | terraform_data.image: Creating...
2026-01-13 00:02:26.022308 | orchestrator | terraform_data.image: Creation complete after 0s [id=d816786d-72d9-acb3-b0bb-a6b8f5fccd60]
2026-01-13 00:02:26.043320 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-01-13 00:02:26.047915 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-01-13 00:02:26.562082 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-13 00:02:26.565942 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-01-13 00:02:26.569758 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-01-13 00:02:26.571849 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-01-13 00:02:27.200941 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=ab1787c0-a6c8-4860-adff-4d1b12555b92]
2026-01-13 00:02:27.204000 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-01-13 00:02:27.261445 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-13 00:02:27.266964 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-01-13 00:02:29.729847 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=f69e02e7-d854-4ded-bb8d-51d0e0400336]
2026-01-13 00:02:29.744897 | orchestrator | local_file.id_rsa_pub: Creating...
2026-01-13 00:02:29.746109 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=79922d84-0445-4535-976b-32e74e35a748]
2026-01-13 00:02:29.751039 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=5295d09e-fddd-4452-8a25-9ba23e2b95ae]
2026-01-13 00:02:29.753398 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-01-13 00:02:29.756704 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=7d94d81cdceac9b3b6345af92bb1be3add46cc11]
2026-01-13 00:02:29.763414 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=0a292857-8cd9-4a14-95ba-a5d022f4a90e]
2026-01-13 00:02:29.763665 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-01-13 00:02:29.766858 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-01-13 00:02:29.773873 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-01-13 00:02:29.788084 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=49cd33e4-72cd-4f3f-940d-55c9f0f00a98]
2026-01-13 00:02:29.794531 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-01-13 00:02:29.813986 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=9db8234e-f6a8-4211-a809-87a509109e78]
2026-01-13 00:02:29.829407 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=5c0bff01-3898-4d25-903e-2ecdf087243c]
2026-01-13 00:02:29.835406 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-01-13 00:02:29.839665 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=aa1825b291d916f8a2c90b4b6acc8cc92ef06ae8]
2026-01-13 00:02:29.840372 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-01-13 00:02:29.845441 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-01-13 00:02:29.882057 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=1f00cc32-4927-4d99-9c1e-b649b1d1f573]
2026-01-13 00:02:29.892472 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=6ad71b9e-76db-4ac5-b372-050f59253056]
2026-01-13 00:02:30.691454 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=4dcaa69c-5414-4861-9f75-cc0da42200e7]
2026-01-13 00:02:30.916927 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=aeecf627-ee34-49a3-a604-f1ec5825b8f9]
2026-01-13 00:02:30.922454 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-01-13 00:02:33.244927 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=83689365-4423-433a-82c0-63cbcaedfdf8]
2026-01-13 00:02:33.637085 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=306cfbe9-242f-441d-bc49-37fa1b1f4569]
2026-01-13 00:02:33.637154 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=5f6d3b65-3844-4001-8889-d6deb3f0644d]
2026-01-13 00:02:33.637170 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=ffeaaf24-9754-44c8-bb36-eb3a5d2d5315]
2026-01-13 00:02:33.637185 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=d36bd727-f6fd-4e09-af6c-5d1752a9fb11]
2026-01-13 00:02:33.637196 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=a3f90d2f-3417-480d-afb3-bab2acf5e837]
2026-01-13 00:02:34.533405 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=626bfd75-5d77-4996-991e-e4b06a8e0797]
2026-01-13 00:02:34.543826 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-01-13 00:02:34.543895 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-01-13 00:02:34.544413 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-01-13 00:02:34.755654 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=82a0ffe4-c609-4bad-a240-ed41f4279f28]
2026-01-13 00:02:34.764931 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-01-13 00:02:34.768726 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-01-13 00:02:34.771937 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-01-13 00:02:34.772642 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-01-13 00:02:34.773462 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-01-13 00:02:34.775446 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-01-13 00:02:34.816346 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=58bf471c-d80e-418b-b4d0-705ee3d0ea1c]
2026-01-13 00:02:34.823719 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-01-13 00:02:34.825648 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-01-13 00:02:34.826667 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-01-13 00:02:35.086288 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=449dd116-1fca-4840-b186-c61271871b7f]
2026-01-13 00:02:35.106507 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-01-13 00:02:35.219647 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=cd34c19d-4240-4fd7-abcf-9ef72006f683]
2026-01-13 00:02:35.226783 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-01-13 00:02:35.528203 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=828cb3f4-1eb7-49b3-b1b5-d9c7daab8229]
2026-01-13 00:02:35.540180 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-01-13 00:02:35.588968 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=8353e7e2-b281-4ec9-9907-6ebbfc0f389b]
2026-01-13 00:02:35.597038 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-01-13 00:02:35.777464 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=19dacea6-f8ac-4dce-a10d-e93d240f1da4]
2026-01-13 00:02:35.790944 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-01-13 00:02:35.808380 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=0490a749-77b9-4f38-91ff-4b1321995ff2]
2026-01-13 00:02:35.817497 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-01-13 00:02:35.937666 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=21f6ad4e-2daf-49cf-9f3a-83bd124ef2b6]
2026-01-13 00:02:35.944428 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=3ca4a049-0969-4712-9f12-0f3f489ff07d]
2026-01-13 00:02:35.952866 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-01-13 00:02:36.023618 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=f680df32-572e-4bf4-b0da-1405d80f137d]
2026-01-13 00:02:36.208002 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=43d3dec3-2b1f-4793-989e-a338aaadfc07]
2026-01-13 00:02:36.229684 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=5133aec3-bcc8-4613-a099-273522a85686]
2026-01-13 00:02:36.591690 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=033250be-3730-4e28-8bae-574aa81261e3]
2026-01-13 00:02:36.764136 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=b46751e7-e6e1-4843-8018-ec0fe26ce24c]
2026-01-13 00:02:36.875375 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=f9c66452-e535-44f2-abde-2653138d6638]
2026-01-13 00:02:37.369742 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=dc661a00-4fe6-413a-9683-d66479df831d]
2026-01-13 00:02:37.820884 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=e4c1491f-c59c-42b7-8ed3-e8c0ebd3203b]
2026-01-13 00:02:37.826854 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-01-13 00:02:38.027715 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=93e66259-6123-483f-83e0-5ca60df99fb6]
2026-01-13 00:02:38.052605 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-01-13 00:02:38.054231 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-13 00:02:38.059474 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-13 00:02:38.059896 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-13 00:02:38.080612 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-13 00:02:38.086786 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-01-13 00:02:40.229756 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=1665fef5-3425-4ab0-90ee-a0f1212f09dc]
2026-01-13 00:02:40.239287 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-13 00:02:40.244241 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-13 00:02:40.250838 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=01887e5a3def8b233d771971aedf02fc115bae67]
2026-01-13 00:02:40.259563 | orchestrator | local_file.inventory: Creating...
2026-01-13 00:02:40.262745 | orchestrator | local_file.inventory: Creation complete after 0s [id=b287740adb6f33ee390d5b061cc2099583ba72f7]
2026-01-13 00:02:41.892173 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=1665fef5-3425-4ab0-90ee-a0f1212f09dc]
2026-01-13 00:02:48.059886 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-13 00:02:48.061177 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-13 00:02:48.062244 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-01-13 00:02:48.062525 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-13 00:02:48.081771 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-13 00:02:48.088110 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-13 00:02:58.060929 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-13 00:02:58.061927 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-13 00:02:58.063065 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-13 00:02:58.063088 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-13 00:02:58.082723 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-13 00:02:58.089108 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-13 00:03:08.069156 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-01-13 00:03:08.069253 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-01-13 00:03:08.069261 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-01-13 00:03:08.069274 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-01-13 00:03:08.083633 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-01-13 00:03:08.089996 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-01-13 00:03:08.693194 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=4fd6f2b6-51ef-4e2f-bb54-ee6be11bcc4c]
2026-01-13 00:03:08.990444 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=71ad69d2-ecf0-4a82-9d7a-7b6f0722f356]
2026-01-13 00:03:09.025246 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=a714327d-623a-4d36-8acf-a2e00af61082]
2026-01-13 00:03:18.077687 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-01-13 00:03:18.077804 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-01-13 00:03:18.084079 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-01-13 00:03:18.863315 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=465cb251-a060-4c51-96e6-b5bbb23ace7b]
2026-01-13 00:03:19.034835 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=e7b65721-5a14-41aa-a13c-e8612ce2c217]
2026-01-13 00:03:20.158968 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 42s [id=57e5bb4e-daf6-427c-b4df-17c8f7f3e79e]
2026-01-13 00:03:20.189727 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-13 00:03:20.197679 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-13 00:03:20.202234 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-13 00:03:20.203736 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-13 00:03:20.204191 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-01-13 00:03:20.205482 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-13 00:03:20.209630 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=7303008649623477253]
2026-01-13 00:03:20.212607 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-13 00:03:20.219657 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-13 00:03:20.219846 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-13 00:03:20.220037 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-13 00:03:20.242604 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-01-13 00:03:23.877966 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=4fd6f2b6-51ef-4e2f-bb54-ee6be11bcc4c/5295d09e-fddd-4452-8a25-9ba23e2b95ae]
2026-01-13 00:03:23.890691 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=465cb251-a060-4c51-96e6-b5bbb23ace7b/0a292857-8cd9-4a14-95ba-a5d022f4a90e]
2026-01-13 00:03:23.909169 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=4fd6f2b6-51ef-4e2f-bb54-ee6be11bcc4c/f69e02e7-d854-4ded-bb8d-51d0e0400336]
2026-01-13 00:03:23.928987 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=e7b65721-5a14-41aa-a13c-e8612ce2c217/9db8234e-f6a8-4211-a809-87a509109e78]
2026-01-13 00:03:23.929413 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=e7b65721-5a14-41aa-a13c-e8612ce2c217/5c0bff01-3898-4d25-903e-2ecdf087243c]
2026-01-13 00:03:23.950613 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=465cb251-a060-4c51-96e6-b5bbb23ace7b/1f00cc32-4927-4d99-9c1e-b649b1d1f573]
2026-01-13 00:03:30.026977 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=465cb251-a060-4c51-96e6-b5bbb23ace7b/49cd33e4-72cd-4f3f-940d-55c9f0f00a98]
2026-01-13 00:03:30.045281 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=4fd6f2b6-51ef-4e2f-bb54-ee6be11bcc4c/79922d84-0445-4535-976b-32e74e35a748]
2026-01-13 00:03:30.063782 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=e7b65721-5a14-41aa-a13c-e8612ce2c217/6ad71b9e-76db-4ac5-b372-050f59253056]
2026-01-13 00:03:30.244783 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-01-13 00:03:40.253840 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-01-13 00:03:40.752495 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=d3b2fcd4-dac4-415f-a170-928342d9466c]
2026-01-13 00:03:40.763948 | orchestrator |
2026-01-13 00:03:40.764000 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-01-13 00:03:40.764007 | orchestrator |
2026-01-13 00:03:40.764024 | orchestrator | Outputs:
2026-01-13 00:03:40.764029 | orchestrator |
2026-01-13 00:03:40.764034 | orchestrator | manager_address =
2026-01-13 00:03:40.764038 | orchestrator | private_key =
2026-01-13 00:03:41.242234 | orchestrator | ok: Runtime: 0:01:20.334471
2026-01-13 00:03:41.275613 |
2026-01-13 00:03:41.275983 | TASK [Create infrastructure (stable)]
2026-01-13 00:03:41.823894 | orchestrator | skipping: Conditional result was False
2026-01-13 00:03:41.838468 |
2026-01-13 00:03:41.838607 | TASK [Fetch manager address]
2026-01-13 00:03:42.307355 | orchestrator | ok
2026-01-13 00:03:42.318514 |
2026-01-13 00:03:42.318686 | TASK [Set manager_host address]
2026-01-13 00:03:42.405335 | orchestrator | ok
2026-01-13 00:03:42.418641 |
2026-01-13 00:03:42.419079 | LOOP [Update ansible collections]
2026-01-13 00:03:43.280941 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-13 00:03:43.281296 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-13 00:03:43.281344 | orchestrator | Starting galaxy collection install process
2026-01-13 00:03:43.281375 | orchestrator | Process install dependency map
2026-01-13 00:03:43.281402 | orchestrator | Starting collection install process
2026-01-13 00:03:43.281428 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-01-13 00:03:43.281459 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-01-13 00:03:43.281499 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-01-13 00:03:43.281563 | orchestrator | ok: Item: commons Runtime: 0:00:00.531083
2026-01-13 00:03:44.384693 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-13 00:03:44.384971 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-13 00:03:44.385253 | orchestrator | Starting galaxy collection install process
2026-01-13 00:03:44.385316 | orchestrator | Process install dependency map
2026-01-13 00:03:44.385368 | orchestrator | Starting collection install process
2026-01-13 00:03:44.385415 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-01-13 00:03:44.385462 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-01-13 00:03:44.385505 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-13 00:03:44.385575 | orchestrator | ok: Item: services Runtime: 0:00:00.814499
2026-01-13 00:03:44.406636 |
2026-01-13 00:03:44.406884 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-13 00:03:55.451602 | orchestrator | ok
2026-01-13 00:03:55.464499 |
2026-01-13 00:03:55.464644 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-13 00:04:55.505953 | orchestrator | ok
2026-01-13 00:04:55.514060 |
2026-01-13 00:04:55.514186 | TASK [Fetch manager ssh hostkey]
2026-01-13 00:04:57.089452 | orchestrator | Output suppressed because no_log was given
2026-01-13 00:04:57.106227 |
2026-01-13 00:04:57.106400 | TASK [Get ssh keypair from terraform environment]
2026-01-13 00:04:57.646382 | orchestrator | ok: Runtime: 0:00:00.006816
2026-01-13 00:04:57.666450 |
2026-01-13 00:04:57.666671 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-13 00:04:57.717049 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-13 00:04:57.728042 |
2026-01-13 00:04:57.728179 | TASK [Run manager part 0]
2026-01-13 00:04:58.775235 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-13 00:04:58.841487 | orchestrator |
2026-01-13 00:04:58.841540 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-01-13 00:04:58.841547 | orchestrator |
2026-01-13 00:04:58.841562 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-01-13 00:05:00.710953 | orchestrator | ok: [testbed-manager]
2026-01-13 00:05:00.710993 | orchestrator |
2026-01-13 00:05:00.711014 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-01-13 00:05:00.711023 | orchestrator |
2026-01-13 00:05:00.711031 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-13 00:05:02.518068 | orchestrator | ok: [testbed-manager]
2026-01-13 00:05:02.518182 | orchestrator |
2026-01-13 00:05:02.518193 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-01-13 00:05:03.211244 | orchestrator | ok: [testbed-manager]
2026-01-13 00:05:03.211317 | orchestrator |
2026-01-13 00:05:03.211328 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-01-13 00:05:03.263657 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:05:03.263725 | orchestrator |
2026-01-13 00:05:03.263741 | orchestrator | TASK [Update package cache] ****************************************************
2026-01-13 00:05:03.293023 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:05:03.293090 | orchestrator |
2026-01-13 00:05:03.293104 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-13 00:05:03.325876 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:05:03.325936 | orchestrator |
2026-01-13 00:05:03.325942 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-13 00:05:03.356888 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:05:03.356963 | orchestrator |
2026-01-13 00:05:03.356975 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-01-13 00:05:03.390608 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:05:03.390684 | orchestrator |
2026-01-13 00:05:03.390698 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-01-13 00:05:03.427920 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:05:03.427992 | orchestrator |
2026-01-13 00:05:03.428002 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-01-13 00:05:03.460900 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:05:03.460967 | orchestrator |
2026-01-13 00:05:03.460975 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-01-13 00:05:04.192861 | orchestrator | changed: [testbed-manager]
2026-01-13 00:05:04.192904 | orchestrator |
2026-01-13 00:05:04.192913 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-01-13 00:07:41.706381 | orchestrator | changed: [testbed-manager]
2026-01-13 00:07:41.706508 | orchestrator |
2026-01-13 00:07:41.706531 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-13 00:08:55.860756 | orchestrator | changed: [testbed-manager]
2026-01-13 00:08:55.860855 | orchestrator |
2026-01-13 00:08:55.860872 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-13 00:09:15.579270 | orchestrator | changed: [testbed-manager]
2026-01-13 00:09:15.579368 | orchestrator |
2026-01-13 00:09:15.579386 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-13 00:09:24.253307 | orchestrator | changed: [testbed-manager]
2026-01-13 00:09:24.253442 | orchestrator |
2026-01-13 00:09:24.253456 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-01-13 00:09:24.300720 | orchestrator | ok: [testbed-manager]
2026-01-13 00:09:24.300840 | orchestrator |
2026-01-13 00:09:24.300857 | orchestrator | TASK [Get current user] ********************************************************
2026-01-13 00:09:25.118406 | orchestrator | ok: [testbed-manager]
2026-01-13 00:09:25.119313 | orchestrator |
2026-01-13 00:09:25.119476 | orchestrator | TASK [Create venv directory] ***************************************************
2026-01-13 00:09:25.866280 | orchestrator | changed: [testbed-manager]
2026-01-13 00:09:25.866405 | orchestrator |
2026-01-13 00:09:25.866424 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-01-13 00:09:32.176803 | orchestrator | changed: [testbed-manager]
2026-01-13 00:09:32.176894 | orchestrator |
2026-01-13 00:09:32.176935 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-01-13 00:09:38.174870 | orchestrator | changed: [testbed-manager]
2026-01-13 00:09:38.174991 | orchestrator |
2026-01-13 00:09:38.175011 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-01-13 00:09:40.834719 | orchestrator | changed: [testbed-manager]
2026-01-13 00:09:40.834806 | orchestrator |
2026-01-13 00:09:40.834821 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-01-13 00:09:42.547589 | orchestrator | changed: [testbed-manager]
2026-01-13 00:09:42.548434 | orchestrator |
2026-01-13 00:09:42.548485 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-01-13
00:09:43.647564 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-13 00:09:43.647686 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-13 00:09:43.647704 | orchestrator | 2026-01-13 00:09:43.647718 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-13 00:09:43.685326 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-13 00:09:43.685382 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-13 00:09:43.685388 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-13 00:09:43.685393 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-13 00:09:48.317761 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-13 00:09:48.317837 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-13 00:09:48.317849 | orchestrator | 2026-01-13 00:09:48.317860 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-13 00:09:48.904046 | orchestrator | changed: [testbed-manager] 2026-01-13 00:09:48.904109 | orchestrator | 2026-01-13 00:09:48.904118 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-13 00:10:10.540346 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-13 00:10:10.540410 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-13 00:10:10.540427 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-13 00:10:10.540440 | orchestrator | 2026-01-13 00:10:10.540452 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-13 00:10:12.807720 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-01-13 00:10:12.807755 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-13 00:10:12.807760 | orchestrator | 2026-01-13 00:10:12.807765 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-13 00:10:12.807770 | orchestrator | 2026-01-13 00:10:12.807774 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-13 00:10:14.209712 | orchestrator | ok: [testbed-manager] 2026-01-13 00:10:14.209781 | orchestrator | 2026-01-13 00:10:14.209794 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-13 00:10:14.259397 | orchestrator | ok: [testbed-manager] 2026-01-13 00:10:14.259475 | orchestrator | 2026-01-13 00:10:14.259489 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-13 00:10:14.363163 | orchestrator | ok: [testbed-manager] 2026-01-13 00:10:14.363255 | orchestrator | 2026-01-13 00:10:14.363274 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-13 00:10:15.102094 | orchestrator | changed: [testbed-manager] 2026-01-13 00:10:15.102187 | orchestrator | 2026-01-13 00:10:15.102205 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-13 00:10:15.815620 | orchestrator | changed: [testbed-manager] 2026-01-13 00:10:15.815665 | orchestrator | 2026-01-13 00:10:15.815674 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-13 00:10:17.152891 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-13 00:10:17.152950 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-13 00:10:17.152962 | orchestrator | 2026-01-13 00:10:17.152985 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-01-13 00:10:18.523731 | orchestrator | changed: [testbed-manager] 2026-01-13 00:10:18.523784 | orchestrator | 2026-01-13 00:10:18.523791 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-13 00:10:20.256024 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-13 00:10:20.256300 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-13 00:10:20.256325 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-13 00:10:20.256338 | orchestrator | 2026-01-13 00:10:20.256499 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-13 00:10:20.315219 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:10:20.315334 | orchestrator | 2026-01-13 00:10:20.315352 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-13 00:10:20.388782 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:10:20.388869 | orchestrator | 2026-01-13 00:10:20.388888 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-13 00:10:20.949428 | orchestrator | changed: [testbed-manager] 2026-01-13 00:10:20.949492 | orchestrator | 2026-01-13 00:10:20.949500 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-13 00:10:21.040771 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:10:21.040822 | orchestrator | 2026-01-13 00:10:21.040829 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-13 00:10:21.930109 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-13 00:10:21.930156 | orchestrator | changed: [testbed-manager] 2026-01-13 00:10:21.930165 | orchestrator | 2026-01-13 00:10:21.930173 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-13 00:10:21.974722 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:10:21.974759 | orchestrator | 2026-01-13 00:10:21.974766 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-13 00:10:22.011141 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:10:22.011181 | orchestrator | 2026-01-13 00:10:22.011189 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-13 00:10:22.047575 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:10:22.047632 | orchestrator | 2026-01-13 00:10:22.047642 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-13 00:10:22.138886 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:10:22.138948 | orchestrator | 2026-01-13 00:10:22.138957 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-13 00:10:22.843492 | orchestrator | ok: [testbed-manager] 2026-01-13 00:10:22.843536 | orchestrator | 2026-01-13 00:10:22.843542 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-13 00:10:22.843548 | orchestrator | 2026-01-13 00:10:22.843552 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-13 00:10:24.248849 | orchestrator | ok: [testbed-manager] 2026-01-13 00:10:24.248904 | orchestrator | 2026-01-13 00:10:24.248917 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-13 00:10:25.175525 | orchestrator | changed: [testbed-manager] 2026-01-13 00:10:25.175594 | orchestrator | 2026-01-13 00:10:25.175614 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:10:25.175631 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-13 00:10:25.175646 | orchestrator | 2026-01-13 00:10:25.457534 | orchestrator | ok: Runtime: 0:05:27.217180 2026-01-13 00:10:25.477994 | 2026-01-13 00:10:25.478208 | TASK [Point out that logging in to the manager is now possible] 2026-01-13 00:10:25.530778 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-01-13 00:10:25.543048 | 2026-01-13 00:10:25.543270 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-13 00:10:25.593606 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 2026-01-13 00:10:25.604195 | 2026-01-13 00:10:25.604355 | TASK [Run manager part 1 + 2] 2026-01-13 00:10:27.040703 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-13 00:10:27.100629 | orchestrator | 2026-01-13 00:10:27.100681 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-13 00:10:27.100688 | orchestrator | 2026-01-13 00:10:27.100700 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-13 00:10:29.610923 | orchestrator | ok: [testbed-manager] 2026-01-13 00:10:29.610971 | orchestrator | 2026-01-13 00:10:29.610994 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-13 00:10:29.648152 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:10:29.648193 | orchestrator | 2026-01-13 00:10:29.648201 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-13 00:10:29.689704 | orchestrator | ok: [testbed-manager] 2026-01-13 00:10:29.689757 | orchestrator | 2026-01-13 00:10:29.689765 | orchestrator | TASK [osism.commons.repository : Gather variables for
each operating system] *** 2026-01-13 00:10:29.741221 | orchestrator | ok: [testbed-manager] 2026-01-13 00:10:29.741278 | orchestrator | 2026-01-13 00:10:29.741290 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-13 00:10:29.804718 | orchestrator | ok: [testbed-manager] 2026-01-13 00:10:29.804954 | orchestrator | 2026-01-13 00:10:29.804969 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-13 00:10:29.868886 | orchestrator | ok: [testbed-manager] 2026-01-13 00:10:29.868938 | orchestrator | 2026-01-13 00:10:29.868947 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-13 00:10:29.911827 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-13 00:10:29.911873 | orchestrator | 2026-01-13 00:10:29.911878 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-13 00:10:30.637134 | orchestrator | ok: [testbed-manager] 2026-01-13 00:10:30.637189 | orchestrator | 2026-01-13 00:10:30.637198 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-13 00:10:30.683952 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:10:30.684000 | orchestrator | 2026-01-13 00:10:30.684007 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-13 00:10:32.132626 | orchestrator | changed: [testbed-manager] 2026-01-13 00:10:32.132883 | orchestrator | 2026-01-13 00:10:32.132914 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-13 00:10:32.712805 | orchestrator | ok: [testbed-manager] 2026-01-13 00:10:32.712863 | orchestrator | 2026-01-13 00:10:32.712872 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-01-13 00:10:34.074138 | orchestrator | changed: [testbed-manager] 2026-01-13 00:10:34.074200 | orchestrator | 2026-01-13 00:10:34.074216 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-13 00:10:49.644767 | orchestrator | changed: [testbed-manager] 2026-01-13 00:10:49.645054 | orchestrator | 2026-01-13 00:10:49.645074 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-13 00:10:50.323994 | orchestrator | ok: [testbed-manager] 2026-01-13 00:10:50.324094 | orchestrator | 2026-01-13 00:10:50.324111 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-13 00:10:50.381550 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:10:50.381642 | orchestrator | 2026-01-13 00:10:50.381659 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-13 00:10:51.320225 | orchestrator | changed: [testbed-manager] 2026-01-13 00:10:51.320342 | orchestrator | 2026-01-13 00:10:51.320358 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-13 00:10:52.317455 | orchestrator | changed: [testbed-manager] 2026-01-13 00:10:52.317505 | orchestrator | 2026-01-13 00:10:52.317511 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-13 00:10:52.876930 | orchestrator | changed: [testbed-manager] 2026-01-13 00:10:52.877001 | orchestrator | 2026-01-13 00:10:52.877017 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-13 00:10:52.914899 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-13 00:10:52.914958 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-01-13 00:10:52.914964 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-13 00:10:52.914969 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-13 00:10:55.380463 | orchestrator | changed: [testbed-manager] 2026-01-13 00:10:55.380567 | orchestrator | 2026-01-13 00:10:55.380587 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-13 00:11:04.305080 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-13 00:11:04.305122 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-13 00:11:04.305129 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-13 00:11:04.305134 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-13 00:11:04.305141 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-13 00:11:04.305146 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-13 00:11:04.305150 | orchestrator | 2026-01-13 00:11:04.305155 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-13 00:11:05.305332 | orchestrator | changed: [testbed-manager] 2026-01-13 00:11:05.305570 | orchestrator | 2026-01-13 00:11:05.305589 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-13 00:11:05.355017 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:11:05.355074 | orchestrator | 2026-01-13 00:11:05.355082 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-13 00:11:09.301474 | orchestrator | changed: [testbed-manager] 2026-01-13 00:11:09.301544 | orchestrator | 2026-01-13 00:11:09.301561 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-13 00:11:09.348850 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:11:09.348934 | 
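The "Install python requirements in venv" task above pips pinned packages (Jinja2, PyYAML, packaging, requests>=2.32.2, docker>=7.1.0) into `/opt/venv` rather than the system Python. A throwaway sketch of the same pattern (the venv path below is illustrative, not the testbed's; the install line is shown commented because it needs network access):

```shell
# Create an isolated venv and use its own pip, as the playbook does for
# /opt/venv. Requires the python3-venv package on Debian/Ubuntu.
python3 -m venv ./demo-venv
./demo-venv/bin/python -m pip --version   # pip is bundled via ensurepip
# The playbook then loops over pinned requirements, e.g.:
#   ./demo-venv/bin/pip install 'requests>=2.32.2' 'docker>=7.1.0'
```

Keeping deployment tooling in a venv protects it from the dist-upgrade and package removals seen earlier in the run.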
orchestrator | 2026-01-13 00:11:09.348950 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-13 00:12:46.399680 | orchestrator | changed: [testbed-manager] 2026-01-13 00:12:46.399782 | orchestrator | 2026-01-13 00:12:46.399803 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-13 00:12:47.496263 | orchestrator | ok: [testbed-manager] 2026-01-13 00:12:47.496346 | orchestrator | 2026-01-13 00:12:47.496363 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:12:47.496377 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-13 00:12:47.496388 | orchestrator | 2026-01-13 00:12:47.748255 | orchestrator | ok: Runtime: 0:02:21.679023 2026-01-13 00:12:47.765645 | 2026-01-13 00:12:47.765844 | TASK [Reboot manager] 2026-01-13 00:12:49.311390 | orchestrator | ok: Runtime: 0:00:00.963978 2026-01-13 00:12:49.328317 | 2026-01-13 00:12:49.328487 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-13 00:13:08.248971 | orchestrator | ok 2026-01-13 00:13:08.258164 | 2026-01-13 00:13:08.258341 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-13 00:14:08.306655 | orchestrator | ok 2026-01-13 00:14:08.316859 | 2026-01-13 00:14:08.317016 | TASK [Deploy manager + bootstrap nodes] 2026-01-13 00:14:10.924027 | orchestrator | 2026-01-13 00:14:10.924297 | orchestrator | # DEPLOY MANAGER 2026-01-13 00:14:10.924326 | orchestrator | 2026-01-13 00:14:10.924342 | orchestrator | + set -e 2026-01-13 00:14:10.924356 | orchestrator | + echo 2026-01-13 00:14:10.924372 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-13 00:14:10.924389 | orchestrator | + echo 2026-01-13 00:14:10.924437 | orchestrator | + cat /opt/manager-vars.sh 2026-01-13 00:14:10.927460 | orchestrator | export NUMBER_OF_NODES=6 2026-01-13 
00:14:10.927541 | orchestrator | 2026-01-13 00:14:10.927556 | orchestrator | export CEPH_VERSION=reef 2026-01-13 00:14:10.927571 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-13 00:14:10.927583 | orchestrator | export MANAGER_VERSION=latest 2026-01-13 00:14:10.927610 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-13 00:14:10.927622 | orchestrator | 2026-01-13 00:14:10.927639 | orchestrator | export ARA=false 2026-01-13 00:14:10.927651 | orchestrator | export DEPLOY_MODE=manager 2026-01-13 00:14:10.927668 | orchestrator | export TEMPEST=true 2026-01-13 00:14:10.927679 | orchestrator | export IS_ZUUL=true 2026-01-13 00:14:10.927690 | orchestrator | 2026-01-13 00:14:10.927708 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.234 2026-01-13 00:14:10.927719 | orchestrator | export EXTERNAL_API=false 2026-01-13 00:14:10.927730 | orchestrator | 2026-01-13 00:14:10.927741 | orchestrator | export IMAGE_USER=ubuntu 2026-01-13 00:14:10.927754 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-13 00:14:10.927765 | orchestrator | 2026-01-13 00:14:10.927776 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-13 00:14:10.927797 | orchestrator | 2026-01-13 00:14:10.927809 | orchestrator | + echo 2026-01-13 00:14:10.927821 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-13 00:14:10.928438 | orchestrator | ++ export INTERACTIVE=false 2026-01-13 00:14:10.928470 | orchestrator | ++ INTERACTIVE=false 2026-01-13 00:14:10.928489 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-13 00:14:10.928508 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-13 00:14:10.928780 | orchestrator | + source /opt/manager-vars.sh 2026-01-13 00:14:10.928803 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-13 00:14:10.928815 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-13 00:14:10.928903 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-13 00:14:10.928918 | orchestrator | ++ CEPH_VERSION=reef 2026-01-13 00:14:10.929067 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-01-13 00:14:10.929109 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-13 00:14:10.929121 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-13 00:14:10.929132 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-13 00:14:10.929144 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-13 00:14:10.929167 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-13 00:14:10.929178 | orchestrator | ++ export ARA=false 2026-01-13 00:14:10.929189 | orchestrator | ++ ARA=false 2026-01-13 00:14:10.929200 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-13 00:14:10.929211 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-13 00:14:10.929230 | orchestrator | ++ export TEMPEST=true 2026-01-13 00:14:10.929249 | orchestrator | ++ TEMPEST=true 2026-01-13 00:14:10.929278 | orchestrator | ++ export IS_ZUUL=true 2026-01-13 00:14:10.929298 | orchestrator | ++ IS_ZUUL=true 2026-01-13 00:14:10.929316 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.234 2026-01-13 00:14:10.929334 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.234 2026-01-13 00:14:10.929398 | orchestrator | ++ export EXTERNAL_API=false 2026-01-13 00:14:10.929419 | orchestrator | ++ EXTERNAL_API=false 2026-01-13 00:14:10.929432 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-13 00:14:10.929442 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-13 00:14:10.929453 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-13 00:14:10.929464 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-13 00:14:10.929475 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-13 00:14:10.929485 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-13 00:14:10.929496 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-13 00:14:10.985916 | orchestrator | + docker version 2026-01-13 00:14:11.272231 | orchestrator | Client: Docker Engine - Community 2026-01-13 00:14:11.272355 | orchestrator | Version: 27.5.1 
2026-01-13 00:14:11.272374 | orchestrator | API version: 1.47 2026-01-13 00:14:11.272388 | orchestrator | Go version: go1.22.11 2026-01-13 00:14:11.272399 | orchestrator | Git commit: 9f9e405 2026-01-13 00:14:11.272410 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-13 00:14:11.272423 | orchestrator | OS/Arch: linux/amd64 2026-01-13 00:14:11.272434 | orchestrator | Context: default 2026-01-13 00:14:11.272445 | orchestrator | 2026-01-13 00:14:11.272456 | orchestrator | Server: Docker Engine - Community 2026-01-13 00:14:11.272467 | orchestrator | Engine: 2026-01-13 00:14:11.272478 | orchestrator | Version: 27.5.1 2026-01-13 00:14:11.272490 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-13 00:14:11.272531 | orchestrator | Go version: go1.22.11 2026-01-13 00:14:11.272543 | orchestrator | Git commit: 4c9b3b0 2026-01-13 00:14:11.272554 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-13 00:14:11.272565 | orchestrator | OS/Arch: linux/amd64 2026-01-13 00:14:11.272576 | orchestrator | Experimental: false 2026-01-13 00:14:11.272587 | orchestrator | containerd: 2026-01-13 00:14:11.272598 | orchestrator | Version: v2.2.1 2026-01-13 00:14:11.272609 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-13 00:14:11.272620 | orchestrator | runc: 2026-01-13 00:14:11.272631 | orchestrator | Version: 1.3.4 2026-01-13 00:14:11.272642 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-13 00:14:11.272653 | orchestrator | docker-init: 2026-01-13 00:14:11.272663 | orchestrator | Version: 0.19.0 2026-01-13 00:14:11.272675 | orchestrator | GitCommit: de40ad0 2026-01-13 00:14:11.275744 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-13 00:14:11.285814 | orchestrator | + set -e 2026-01-13 00:14:11.285960 | orchestrator | + source /opt/manager-vars.sh 2026-01-13 00:14:11.285978 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-13 00:14:11.285991 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-13 
00:14:11.286001 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-13 00:14:11.286010 | orchestrator | ++ CEPH_VERSION=reef 2026-01-13 00:14:11.286113 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-13 00:14:11.286127 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-13 00:14:11.286136 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-13 00:14:11.286146 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-13 00:14:11.286156 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-13 00:14:11.286166 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-13 00:14:11.286175 | orchestrator | ++ export ARA=false 2026-01-13 00:14:11.286185 | orchestrator | ++ ARA=false 2026-01-13 00:14:11.286195 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-13 00:14:11.286205 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-13 00:14:11.286215 | orchestrator | ++ export TEMPEST=true 2026-01-13 00:14:11.286224 | orchestrator | ++ TEMPEST=true 2026-01-13 00:14:11.286234 | orchestrator | ++ export IS_ZUUL=true 2026-01-13 00:14:11.286243 | orchestrator | ++ IS_ZUUL=true 2026-01-13 00:14:11.286252 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.234 2026-01-13 00:14:11.286262 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.234 2026-01-13 00:14:11.286272 | orchestrator | ++ export EXTERNAL_API=false 2026-01-13 00:14:11.286281 | orchestrator | ++ EXTERNAL_API=false 2026-01-13 00:14:11.286291 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-13 00:14:11.286300 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-13 00:14:11.286309 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-13 00:14:11.286319 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-13 00:14:11.286329 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-13 00:14:11.286339 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-13 00:14:11.286348 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-13 00:14:11.286369 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-13 00:14:11.286379 | orchestrator | ++ INTERACTIVE=false 2026-01-13 00:14:11.286388 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-13 00:14:11.286402 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-13 00:14:11.286412 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-13 00:14:11.286421 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-13 00:14:11.286431 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-01-13 00:14:11.293024 | orchestrator | + set -e 2026-01-13 00:14:11.293109 | orchestrator | + VERSION=reef 2026-01-13 00:14:11.294361 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-13 00:14:11.300309 | orchestrator | + [[ -n ceph_version: reef ]] 2026-01-13 00:14:11.300367 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-01-13 00:14:11.306181 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-01-13 00:14:11.312211 | orchestrator | + set -e 2026-01-13 00:14:11.312265 | orchestrator | + VERSION=2024.2 2026-01-13 00:14:11.312626 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-13 00:14:11.316542 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-01-13 00:14:11.316601 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-01-13 00:14:11.321822 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-13 00:14:11.322558 | orchestrator | ++ semver latest 7.0.0 2026-01-13 00:14:11.384760 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-13 00:14:11.384821 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-13 00:14:11.384828 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-13 00:14:11.385601 | orchestrator | ++ semver latest 10.0.0-0 2026-01-13 00:14:11.442961 | 
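The `set-ceph-version.sh` and `set-openstack-version.sh` steps traced above both use the same grep-then-sed pinning pattern. A self-contained sketch (the file name below is a stand-in; the testbed edits `/opt/configuration/environments/manager/configuration.yml`):

```shell
# The grep+sed version-pinning pattern from the deploy scripts in the log.
set -e
VERSION="reef"
CONFIG="configuration.yml"
printf 'ceph_version: quincy\n' > "$CONFIG"   # demo file with an old pin
# Only rewrite the key if it is already present in the file, as the
# scripts do with their [[ -n ... ]] guard on the grep output.
if grep -q '^ceph_version:' "$CONFIG"; then
    sed -i "s/ceph_version: .*/ceph_version: ${VERSION}/g" "$CONFIG"
fi
cat "$CONFIG"
```

The guard means a missing key is left missing rather than silently appended, so configuration drift stays visible.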
orchestrator | + [[ -1 -ge 0 ]] 2026-01-13 00:14:11.443776 | orchestrator | ++ semver 2024.2 2025.1 2026-01-13 00:14:11.502862 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-13 00:14:11.502957 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-01-13 00:14:11.597273 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-13 00:14:11.598424 | orchestrator | + source /opt/venv/bin/activate 2026-01-13 00:14:11.599475 | orchestrator | ++ deactivate nondestructive 2026-01-13 00:14:11.599510 | orchestrator | ++ '[' -n '' ']' 2026-01-13 00:14:11.599524 | orchestrator | ++ '[' -n '' ']' 2026-01-13 00:14:11.599543 | orchestrator | ++ hash -r 2026-01-13 00:14:11.599557 | orchestrator | ++ '[' -n '' ']' 2026-01-13 00:14:11.599569 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-13 00:14:11.599582 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-13 00:14:11.599598 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-01-13 00:14:11.599615 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-13 00:14:11.599628 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-13 00:14:11.599640 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-13 00:14:11.599652 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-13 00:14:11.599669 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-13 00:14:11.599877 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-13 00:14:11.599905 | orchestrator | ++ export PATH 2026-01-13 00:14:11.599925 | orchestrator | ++ '[' -n '' ']' 2026-01-13 00:14:11.599945 | orchestrator | ++ '[' -z '' ']' 2026-01-13 00:14:11.599964 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-13 00:14:11.599990 | orchestrator | ++ PS1='(venv) ' 2026-01-13 00:14:11.600012 | orchestrator | ++ export PS1 2026-01-13 00:14:11.600032 | orchestrator | ++ 
VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-13 00:14:11.600052 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-13 00:14:11.600065 | orchestrator | ++ hash -r 2026-01-13 00:14:11.600132 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-13 00:14:14.983897 | orchestrator | 2026-01-13 00:14:14.984027 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-13 00:14:14.984057 | orchestrator | 2026-01-13 00:14:14.984105 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-13 00:14:15.580401 | orchestrator | ok: [testbed-manager] 2026-01-13 00:14:15.580520 | orchestrator | 2026-01-13 00:14:15.580550 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-01-13 00:14:16.597548 | orchestrator | changed: [testbed-manager] 2026-01-13 00:14:16.597677 | orchestrator | 2026-01-13 00:14:16.597705 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-13 00:14:16.597725 | orchestrator | 2026-01-13 00:14:16.597745 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-13 00:14:19.021864 | orchestrator | ok: [testbed-manager] 2026-01-13 00:14:19.021964 | orchestrator | 2026-01-13 00:14:19.021981 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-13 00:14:19.076193 | orchestrator | ok: [testbed-manager] 2026-01-13 00:14:19.076271 | orchestrator | 2026-01-13 00:14:19.076285 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-13 00:14:19.561352 | orchestrator | changed: [testbed-manager] 2026-01-13 00:14:19.561450 | orchestrator | 2026-01-13 00:14:19.561466 | orchestrator | TASK [Add netbox_enable parameter] 
********************************************* 2026-01-13 00:14:19.605821 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:14:19.605920 | orchestrator | 2026-01-13 00:14:19.605936 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-13 00:14:19.977422 | orchestrator | changed: [testbed-manager] 2026-01-13 00:14:19.977540 | orchestrator | 2026-01-13 00:14:19.977558 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-13 00:14:20.035414 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:14:20.035509 | orchestrator | 2026-01-13 00:14:20.035526 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-13 00:14:20.391487 | orchestrator | ok: [testbed-manager] 2026-01-13 00:14:20.391606 | orchestrator | 2026-01-13 00:14:20.391623 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-01-13 00:14:20.519105 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:14:20.519201 | orchestrator | 2026-01-13 00:14:20.519218 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-13 00:14:20.519231 | orchestrator | 2026-01-13 00:14:20.519243 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-13 00:14:22.307285 | orchestrator | ok: [testbed-manager] 2026-01-13 00:14:22.307402 | orchestrator | 2026-01-13 00:14:22.307419 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-13 00:14:22.403589 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-13 00:14:22.403678 | orchestrator | 2026-01-13 00:14:22.403695 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-13 00:14:22.459576 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-13 00:14:22.459659 | orchestrator | 2026-01-13 00:14:22.459675 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-01-13 00:14:23.586354 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-13 00:14:23.586455 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-13 00:14:23.586472 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-13 00:14:23.586485 | orchestrator | 2026-01-13 00:14:23.586498 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-13 00:14:25.431768 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-13 00:14:25.431849 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-01-13 00:14:25.431863 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-13 00:14:25.431877 | orchestrator | 2026-01-13 00:14:25.431891 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-13 00:14:26.056136 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-13 00:14:26.056228 | orchestrator | changed: [testbed-manager] 2026-01-13 00:14:26.056247 | orchestrator | 2026-01-13 00:14:26.056260 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-13 00:14:26.698847 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-13 00:14:26.698946 | orchestrator | changed: [testbed-manager] 2026-01-13 00:14:26.698963 | orchestrator | 2026-01-13 00:14:26.698975 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-13 00:14:26.755131 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:14:26.755226 | orchestrator | 2026-01-13 
00:14:26.755242 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-13 00:14:27.141764 | orchestrator | ok: [testbed-manager] 2026-01-13 00:14:27.141863 | orchestrator | 2026-01-13 00:14:27.141881 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-13 00:14:27.214966 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-13 00:14:27.215064 | orchestrator | 2026-01-13 00:14:27.215117 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-13 00:14:28.289765 | orchestrator | changed: [testbed-manager] 2026-01-13 00:14:28.289843 | orchestrator | 2026-01-13 00:14:28.289864 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-13 00:14:29.076574 | orchestrator | changed: [testbed-manager] 2026-01-13 00:14:29.076655 | orchestrator | 2026-01-13 00:14:29.076670 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-13 00:14:52.980729 | orchestrator | changed: [testbed-manager] 2026-01-13 00:14:52.980836 | orchestrator | 2026-01-13 00:14:52.980855 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-13 00:14:53.029174 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:14:53.029261 | orchestrator | 2026-01-13 00:14:53.029273 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-13 00:14:53.029311 | orchestrator | 2026-01-13 00:14:53.029321 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-13 00:14:54.771863 | orchestrator | ok: [testbed-manager] 2026-01-13 00:14:54.771958 | orchestrator | 2026-01-13 00:14:54.771974 | orchestrator | TASK [Apply manager role] 
****************************************************** 2026-01-13 00:14:54.884340 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-13 00:14:54.884431 | orchestrator | 2026-01-13 00:14:54.884447 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-01-13 00:14:54.936833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-13 00:14:54.936913 | orchestrator | 2026-01-13 00:14:54.936930 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-13 00:14:57.453556 | orchestrator | ok: [testbed-manager] 2026-01-13 00:14:57.453657 | orchestrator | 2026-01-13 00:14:57.453674 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-13 00:14:57.495284 | orchestrator | ok: [testbed-manager] 2026-01-13 00:14:57.495364 | orchestrator | 2026-01-13 00:14:57.495377 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-13 00:14:57.622227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-13 00:14:57.622313 | orchestrator | 2026-01-13 00:14:57.622328 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-13 00:15:00.371170 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-13 00:15:00.371275 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-13 00:15:00.371291 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-13 00:15:00.371304 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-13 00:15:00.371315 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-13 00:15:00.371326 | 
orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-13 00:15:00.371337 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-13 00:15:00.371348 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-01-13 00:15:00.371359 | orchestrator | 2026-01-13 00:15:00.371372 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-13 00:15:01.009053 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:01.009143 | orchestrator | 2026-01-13 00:15:01.009151 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-13 00:15:01.628003 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:01.628129 | orchestrator | 2026-01-13 00:15:01.628146 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-13 00:15:01.701748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-13 00:15:01.701833 | orchestrator | 2026-01-13 00:15:01.701848 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-13 00:15:02.913399 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-13 00:15:02.913499 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-13 00:15:02.913514 | orchestrator | 2026-01-13 00:15:02.913527 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-13 00:15:03.532409 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:03.532508 | orchestrator | 2026-01-13 00:15:03.532524 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-13 00:15:03.580545 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:15:03.580636 | orchestrator | 2026-01-13 00:15:03.580650 | 
orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-13 00:15:03.659716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-13 00:15:03.659794 | orchestrator | 2026-01-13 00:15:03.659804 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-13 00:15:04.260689 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:04.260767 | orchestrator | 2026-01-13 00:15:04.260807 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-13 00:15:04.325748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-13 00:15:04.325840 | orchestrator | 2026-01-13 00:15:04.325854 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-13 00:15:05.670822 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-13 00:15:05.670930 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-13 00:15:05.670946 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:05.670960 | orchestrator | 2026-01-13 00:15:05.670972 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-13 00:15:06.258188 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:06.258288 | orchestrator | 2026-01-13 00:15:06.258305 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-13 00:15:06.306571 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:15:06.306677 | orchestrator | 2026-01-13 00:15:06.306699 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-13 00:15:06.394548 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-13 00:15:06.394631 | orchestrator | 2026-01-13 00:15:06.394660 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-13 00:15:06.919173 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:06.919263 | orchestrator | 2026-01-13 00:15:06.919278 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-13 00:15:07.330343 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:07.330425 | orchestrator | 2026-01-13 00:15:07.330436 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-13 00:15:08.511521 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-13 00:15:08.511634 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-01-13 00:15:08.511659 | orchestrator | 2026-01-13 00:15:08.511681 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-13 00:15:09.117955 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:09.118125 | orchestrator | 2026-01-13 00:15:09.118146 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-13 00:15:09.483569 | orchestrator | ok: [testbed-manager] 2026-01-13 00:15:09.483676 | orchestrator | 2026-01-13 00:15:09.483692 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-13 00:15:09.849226 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:09.849317 | orchestrator | 2026-01-13 00:15:09.849332 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-13 00:15:09.894270 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:15:09.894360 | orchestrator | 2026-01-13 00:15:09.894375 | orchestrator | 
TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-13 00:15:09.964207 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-13 00:15:09.964310 | orchestrator | 2026-01-13 00:15:09.964326 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-13 00:15:10.003886 | orchestrator | ok: [testbed-manager] 2026-01-13 00:15:10.003973 | orchestrator | 2026-01-13 00:15:10.003988 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-13 00:15:12.004879 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-13 00:15:12.004993 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-13 00:15:12.005014 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-01-13 00:15:12.005030 | orchestrator | 2026-01-13 00:15:12.005046 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-13 00:15:12.730764 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:12.730866 | orchestrator | 2026-01-13 00:15:12.730883 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-13 00:15:13.417219 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:13.417329 | orchestrator | 2026-01-13 00:15:13.417354 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-13 00:15:14.101692 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:14.101787 | orchestrator | 2026-01-13 00:15:14.101804 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-13 00:15:14.169247 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-13 00:15:14.169341 | orchestrator | 2026-01-13 00:15:14.169357 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-01-13 00:15:14.210871 | orchestrator | ok: [testbed-manager] 2026-01-13 00:15:14.210961 | orchestrator | 2026-01-13 00:15:14.210976 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-13 00:15:14.917246 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-13 00:15:14.917345 | orchestrator | 2026-01-13 00:15:14.917362 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-13 00:15:14.988838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-13 00:15:14.988921 | orchestrator | 2026-01-13 00:15:14.988935 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-13 00:15:15.707763 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:15.707866 | orchestrator | 2026-01-13 00:15:15.707884 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-13 00:15:16.298193 | orchestrator | ok: [testbed-manager] 2026-01-13 00:15:16.298272 | orchestrator | 2026-01-13 00:15:16.298286 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-13 00:15:16.346671 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:15:16.346759 | orchestrator | 2026-01-13 00:15:16.346775 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-13 00:15:16.393962 | orchestrator | ok: [testbed-manager] 2026-01-13 00:15:16.394217 | orchestrator | 2026-01-13 00:15:16.394249 | 
orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-13 00:15:17.229429 | orchestrator | changed: [testbed-manager] 2026-01-13 00:15:17.229521 | orchestrator | 2026-01-13 00:15:17.229536 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-13 00:16:32.348809 | orchestrator | changed: [testbed-manager] 2026-01-13 00:16:32.348869 | orchestrator | 2026-01-13 00:16:32.348882 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-13 00:16:33.348899 | orchestrator | ok: [testbed-manager] 2026-01-13 00:16:33.348993 | orchestrator | 2026-01-13 00:16:33.349011 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-13 00:16:33.413302 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:16:33.413382 | orchestrator | 2026-01-13 00:16:33.413396 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-01-13 00:16:38.453842 | orchestrator | changed: [testbed-manager] 2026-01-13 00:16:38.453947 | orchestrator | 2026-01-13 00:16:38.453984 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-13 00:16:38.555485 | orchestrator | ok: [testbed-manager] 2026-01-13 00:16:38.555571 | orchestrator | 2026-01-13 00:16:38.555588 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-13 00:16:38.555602 | orchestrator | 2026-01-13 00:16:38.555613 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-13 00:16:38.606525 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:16:38.606596 | orchestrator | 2026-01-13 00:16:38.606605 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-13 00:17:38.664088 | orchestrator | Pausing for 
60 seconds 2026-01-13 00:17:38.664265 | orchestrator | changed: [testbed-manager] 2026-01-13 00:17:38.664284 | orchestrator | 2026-01-13 00:17:38.664299 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-01-13 00:17:41.745596 | orchestrator | changed: [testbed-manager] 2026-01-13 00:17:41.745701 | orchestrator | 2026-01-13 00:17:41.745719 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-13 00:18:23.302119 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-13 00:18:23.302297 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-01-13 00:18:23.302313 | orchestrator | changed: [testbed-manager] 2026-01-13 00:18:23.302326 | orchestrator | 2026-01-13 00:18:23.302337 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-13 00:18:33.489363 | orchestrator | changed: [testbed-manager] 2026-01-13 00:18:33.489495 | orchestrator | 2026-01-13 00:18:33.489513 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-13 00:18:33.582644 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-13 00:18:33.582742 | orchestrator | 2026-01-13 00:18:33.582760 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-13 00:18:33.582776 | orchestrator | 2026-01-13 00:18:33.582790 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-13 00:18:33.639866 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:18:33.639933 | orchestrator | 2026-01-13 00:18:33.639940 | orchestrator | TASK [osism.services.manager : Include version verification tasks] 
************* 2026-01-13 00:18:33.712056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-13 00:18:33.712153 | orchestrator | 2026-01-13 00:18:33.712228 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-13 00:18:34.485123 | orchestrator | changed: [testbed-manager] 2026-01-13 00:18:34.485309 | orchestrator | 2026-01-13 00:18:34.485329 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-13 00:18:37.516581 | orchestrator | ok: [testbed-manager] 2026-01-13 00:18:37.516693 | orchestrator | 2026-01-13 00:18:37.516718 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-01-13 00:18:37.594630 | orchestrator | ok: [testbed-manager] => { 2026-01-13 00:18:37.594725 | orchestrator | "version_check_result.stdout_lines": [ 2026-01-13 00:18:37.594740 | orchestrator | "=== OSISM Container Version Check ===", 2026-01-13 00:18:37.594755 | orchestrator | "Checking running containers against expected versions...", 2026-01-13 00:18:37.594767 | orchestrator | "", 2026-01-13 00:18:37.594779 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-01-13 00:18:37.594790 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-13 00:18:37.594801 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.594812 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-13 00:18:37.594823 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.594834 | orchestrator | "", 2026-01-13 00:18:37.594845 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-01-13 00:18:37.594857 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-01-13 00:18:37.594868 | orchestrator | " 
Enabled: true", 2026-01-13 00:18:37.594879 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-01-13 00:18:37.594889 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.594900 | orchestrator | "", 2026-01-13 00:18:37.594911 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-01-13 00:18:37.594922 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-13 00:18:37.594933 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.594944 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-13 00:18:37.594954 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.594966 | orchestrator | "", 2026-01-13 00:18:37.594977 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-01-13 00:18:37.594988 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-13 00:18:37.594999 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.595010 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-13 00:18:37.595021 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.595032 | orchestrator | "", 2026-01-13 00:18:37.595068 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-01-13 00:18:37.595079 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-13 00:18:37.595090 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.595101 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-13 00:18:37.595112 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.595122 | orchestrator | "", 2026-01-13 00:18:37.595133 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-01-13 00:18:37.595147 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-13 00:18:37.595261 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.595277 | orchestrator | " Running: 
registry.osism.tech/osism/osism:latest", 2026-01-13 00:18:37.595290 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.595303 | orchestrator | "", 2026-01-13 00:18:37.595315 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-01-13 00:18:37.595327 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-13 00:18:37.595340 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.595353 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-13 00:18:37.595365 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.595378 | orchestrator | "", 2026-01-13 00:18:37.595390 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-01-13 00:18:37.595403 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-13 00:18:37.595415 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.595427 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-13 00:18:37.595449 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.595466 | orchestrator | "", 2026-01-13 00:18:37.595479 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-13 00:18:37.595492 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-01-13 00:18:37.595504 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.595515 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-01-13 00:18:37.595525 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.595536 | orchestrator | "", 2026-01-13 00:18:37.595547 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-13 00:18:37.595561 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-13 00:18:37.595579 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.595598 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-13 00:18:37.595617 | orchestrator | 
" Status: ✅ MATCH", 2026-01-13 00:18:37.595635 | orchestrator | "", 2026-01-13 00:18:37.595653 | orchestrator | "Checking service: api (OSISM API Service)", 2026-01-13 00:18:37.595672 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-13 00:18:37.595689 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.595707 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-13 00:18:37.595724 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.595742 | orchestrator | "", 2026-01-13 00:18:37.595761 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-01-13 00:18:37.595780 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-13 00:18:37.595799 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.595811 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-13 00:18:37.595822 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.595832 | orchestrator | "", 2026-01-13 00:18:37.595843 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-01-13 00:18:37.595854 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-13 00:18:37.595864 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.595875 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-13 00:18:37.595885 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.595896 | orchestrator | "", 2026-01-13 00:18:37.595907 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-01-13 00:18:37.595917 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-13 00:18:37.595940 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.595951 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-13 00:18:37.595962 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.595972 | orchestrator | "", 2026-01-13 00:18:37.595983 | orchestrator | "Checking service: flower (Celery 
Flower Monitor)", 2026-01-13 00:18:37.596014 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-13 00:18:37.596025 | orchestrator | " Enabled: true", 2026-01-13 00:18:37.596036 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-13 00:18:37.596046 | orchestrator | " Status: ✅ MATCH", 2026-01-13 00:18:37.596057 | orchestrator | "", 2026-01-13 00:18:37.596068 | orchestrator | "=== Summary ===", 2026-01-13 00:18:37.596078 | orchestrator | "Errors (version mismatches): 0", 2026-01-13 00:18:37.596089 | orchestrator | "Warnings (expected containers not running): 0", 2026-01-13 00:18:37.596099 | orchestrator | "", 2026-01-13 00:18:37.596110 | orchestrator | "✅ All running containers match expected versions!" 2026-01-13 00:18:37.596120 | orchestrator | ] 2026-01-13 00:18:37.596131 | orchestrator | } 2026-01-13 00:18:37.596142 | orchestrator | 2026-01-13 00:18:37.596153 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-13 00:18:37.648417 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:18:37.648508 | orchestrator | 2026-01-13 00:18:37.648523 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:18:37.648537 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2026-01-13 00:18:37.648549 | orchestrator | 2026-01-13 00:18:37.743791 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-13 00:18:37.743908 | orchestrator | + deactivate 2026-01-13 00:18:37.743939 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-13 00:18:37.743963 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-13 00:18:37.743983 | orchestrator | + export PATH 2026-01-13 00:18:37.744003 | orchestrator | + unset 
_OLD_VIRTUAL_PATH 2026-01-13 00:18:37.744024 | orchestrator | + '[' -n '' ']' 2026-01-13 00:18:37.744043 | orchestrator | + hash -r 2026-01-13 00:18:37.744061 | orchestrator | + '[' -n '' ']' 2026-01-13 00:18:37.744079 | orchestrator | + unset VIRTUAL_ENV 2026-01-13 00:18:37.744097 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-13 00:18:37.744117 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-01-13 00:18:37.744136 | orchestrator | + unset -f deactivate 2026-01-13 00:18:37.744156 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-13 00:18:37.752222 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-13 00:18:37.752277 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-13 00:18:37.752292 | orchestrator | + local max_attempts=60 2026-01-13 00:18:37.752305 | orchestrator | + local name=ceph-ansible 2026-01-13 00:18:37.752317 | orchestrator | + local attempt_num=1 2026-01-13 00:18:37.752798 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:18:37.781414 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-13 00:18:37.781479 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-13 00:18:37.781491 | orchestrator | + local max_attempts=60 2026-01-13 00:18:37.781502 | orchestrator | + local name=kolla-ansible 2026-01-13 00:18:37.781512 | orchestrator | + local attempt_num=1 2026-01-13 00:18:37.782338 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-13 00:18:37.816205 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-13 00:18:37.816283 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-13 00:18:37.816295 | orchestrator | + local max_attempts=60 2026-01-13 00:18:37.816305 | orchestrator | + local name=osism-ansible 2026-01-13 00:18:37.816315 | orchestrator | + local attempt_num=1 2026-01-13 00:18:37.817371 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-13 00:18:37.855647 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-13 00:18:37.855731 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-13 00:18:37.855745 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-13 00:18:38.521158 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-01-13 00:18:38.690507 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-13 00:18:38.690630 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-01-13 00:18:38.690647 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-01-13 00:18:38.690659 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-01-13 00:18:38.690672 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-01-13 00:18:38.690683 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy) 2026-01-13 00:18:38.690694 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy) 2026-01-13 00:18:38.690722 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 57 seconds (healthy) 2026-01-13 00:18:38.690734 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy) 
2026-01-13 00:18:38.690745 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp 2026-01-13 00:18:38.690755 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up About a minute (healthy) 2026-01-13 00:18:38.690766 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp 2026-01-13 00:18:38.690777 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-01-13 00:18:38.690788 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-01-13 00:18:38.690799 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-01-13 00:18:38.690810 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy) 2026-01-13 00:18:38.696638 | orchestrator | ++ semver latest 7.0.0 2026-01-13 00:18:38.747303 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-13 00:18:38.747387 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-13 00:18:38.747402 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-13 00:18:38.751535 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-13 00:18:50.971755 | orchestrator | 2026-01-13 00:18:50 | INFO  | Task 3d37815b-b49d-49a6-90f4-18332c355b3c (resolvconf) was prepared for execution. 
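The xtrace output above shows the deploy script polling each manager container (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) with `docker inspect` until its health status reports `healthy`. A minimal sketch of such a helper, reconstructed from the traced `wait_for_container_healthy` calls; the retry delay and the error message are assumptions, and `docker` is used without the absolute `/usr/bin/` path seen in the trace:

```shell
#!/usr/bin/env bash
# Poll a container's Docker health status until it reports "healthy",
# giving up after max_attempts tries (sketch; the 10s delay is an assumption).
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "${name}")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 10
    done
}

# usage (matches the traced calls): wait_for_container_healthy 60 ceph-ansible
```

The trace shows three such calls in sequence, so a failure in any one container aborts the deployment before `osism apply` tasks are started.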
2026-01-13 00:18:50.971941 | orchestrator | 2026-01-13 00:18:50 | INFO  | It takes a moment until task 3d37815b-b49d-49a6-90f4-18332c355b3c (resolvconf) has been started and output is visible here. 2026-01-13 00:19:04.609824 | orchestrator | 2026-01-13 00:19:04.609941 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-13 00:19:04.609959 | orchestrator | 2026-01-13 00:19:04.609972 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-13 00:19:04.609984 | orchestrator | Tuesday 13 January 2026 00:18:55 +0000 (0:00:00.144) 0:00:00.144 ******* 2026-01-13 00:19:04.609995 | orchestrator | ok: [testbed-manager] 2026-01-13 00:19:04.610007 | orchestrator | 2026-01-13 00:19:04.610109 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-13 00:19:04.610125 | orchestrator | Tuesday 13 January 2026 00:18:58 +0000 (0:00:03.602) 0:00:03.746 ******* 2026-01-13 00:19:04.610136 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:19:04.610148 | orchestrator | 2026-01-13 00:19:04.610159 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-13 00:19:04.610170 | orchestrator | Tuesday 13 January 2026 00:18:58 +0000 (0:00:00.067) 0:00:03.814 ******* 2026-01-13 00:19:04.610206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-13 00:19:04.610218 | orchestrator | 2026-01-13 00:19:04.610229 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-13 00:19:04.610241 | orchestrator | Tuesday 13 January 2026 00:18:58 +0000 (0:00:00.089) 0:00:03.904 ******* 2026-01-13 00:19:04.610263 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-13 00:19:04.610275 | orchestrator | 2026-01-13 00:19:04.610286 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-13 00:19:04.610297 | orchestrator | Tuesday 13 January 2026 00:18:58 +0000 (0:00:00.073) 0:00:03.978 ******* 2026-01-13 00:19:04.610307 | orchestrator | ok: [testbed-manager] 2026-01-13 00:19:04.610318 | orchestrator | 2026-01-13 00:19:04.610330 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-13 00:19:04.610340 | orchestrator | Tuesday 13 January 2026 00:19:00 +0000 (0:00:01.066) 0:00:05.045 ******* 2026-01-13 00:19:04.610352 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:19:04.610365 | orchestrator | 2026-01-13 00:19:04.610378 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-13 00:19:04.610391 | orchestrator | Tuesday 13 January 2026 00:19:00 +0000 (0:00:00.063) 0:00:05.108 ******* 2026-01-13 00:19:04.610403 | orchestrator | ok: [testbed-manager] 2026-01-13 00:19:04.610416 | orchestrator | 2026-01-13 00:19:04.610428 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-13 00:19:04.610441 | orchestrator | Tuesday 13 January 2026 00:19:00 +0000 (0:00:00.476) 0:00:05.585 ******* 2026-01-13 00:19:04.610453 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:19:04.610466 | orchestrator | 2026-01-13 00:19:04.610479 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-13 00:19:04.610492 | orchestrator | Tuesday 13 January 2026 00:19:00 +0000 (0:00:00.075) 0:00:05.660 ******* 2026-01-13 00:19:04.610505 | orchestrator | changed: [testbed-manager] 2026-01-13 00:19:04.610518 | orchestrator | 2026-01-13 
00:19:04.610530 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-13 00:19:04.610542 | orchestrator | Tuesday 13 January 2026 00:19:01 +0000 (0:00:00.529) 0:00:06.190 ******* 2026-01-13 00:19:04.610555 | orchestrator | changed: [testbed-manager] 2026-01-13 00:19:04.610567 | orchestrator | 2026-01-13 00:19:04.610580 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-13 00:19:04.610590 | orchestrator | Tuesday 13 January 2026 00:19:02 +0000 (0:00:01.060) 0:00:07.250 ******* 2026-01-13 00:19:04.610601 | orchestrator | ok: [testbed-manager] 2026-01-13 00:19:04.610637 | orchestrator | 2026-01-13 00:19:04.610648 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-13 00:19:04.610659 | orchestrator | Tuesday 13 January 2026 00:19:03 +0000 (0:00:00.951) 0:00:08.201 ******* 2026-01-13 00:19:04.610670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-13 00:19:04.610681 | orchestrator | 2026-01-13 00:19:04.610692 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-13 00:19:04.610703 | orchestrator | Tuesday 13 January 2026 00:19:03 +0000 (0:00:00.086) 0:00:08.287 ******* 2026-01-13 00:19:04.610714 | orchestrator | changed: [testbed-manager] 2026-01-13 00:19:04.610725 | orchestrator | 2026-01-13 00:19:04.610736 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:19:04.610748 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-13 00:19:04.610759 | orchestrator | 2026-01-13 00:19:04.610769 | orchestrator | 2026-01-13 00:19:04.610780 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-13 00:19:04.610791 | orchestrator | Tuesday 13 January 2026 00:19:04 +0000 (0:00:01.139) 0:00:09.427 ******* 2026-01-13 00:19:04.610802 | orchestrator | =============================================================================== 2026-01-13 00:19:04.610813 | orchestrator | Gathering Facts --------------------------------------------------------- 3.60s 2026-01-13 00:19:04.610824 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s 2026-01-13 00:19:04.610834 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.07s 2026-01-13 00:19:04.610845 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.06s 2026-01-13 00:19:04.610856 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s 2026-01-13 00:19:04.610867 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s 2026-01-13 00:19:04.610895 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s 2026-01-13 00:19:04.610906 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-01-13 00:19:04.610917 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-01-13 00:19:04.610928 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-01-13 00:19:04.610939 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-01-13 00:19:04.610949 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-01-13 00:19:04.610960 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-01-13 00:19:04.890630 | 
orchestrator | + osism apply sshconfig 2026-01-13 00:19:16.941434 | orchestrator | 2026-01-13 00:19:16 | INFO  | Task 089338ef-ce47-42f2-90d9-dd2487ae976e (sshconfig) was prepared for execution. 2026-01-13 00:19:16.941552 | orchestrator | 2026-01-13 00:19:16 | INFO  | It takes a moment until task 089338ef-ce47-42f2-90d9-dd2487ae976e (sshconfig) has been started and output is visible here. 2026-01-13 00:19:27.934186 | orchestrator | 2026-01-13 00:19:27.934332 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-13 00:19:27.934349 | orchestrator | 2026-01-13 00:19:27.934361 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-13 00:19:27.934372 | orchestrator | Tuesday 13 January 2026 00:19:20 +0000 (0:00:00.155) 0:00:00.155 ******* 2026-01-13 00:19:27.934384 | orchestrator | ok: [testbed-manager] 2026-01-13 00:19:27.934396 | orchestrator | 2026-01-13 00:19:27.934407 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-13 00:19:27.934417 | orchestrator | Tuesday 13 January 2026 00:19:21 +0000 (0:00:00.564) 0:00:00.719 ******* 2026-01-13 00:19:27.934456 | orchestrator | changed: [testbed-manager] 2026-01-13 00:19:27.934468 | orchestrator | 2026-01-13 00:19:27.934479 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-13 00:19:27.934490 | orchestrator | Tuesday 13 January 2026 00:19:21 +0000 (0:00:00.400) 0:00:01.119 ******* 2026-01-13 00:19:27.934501 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-13 00:19:27.934512 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-13 00:19:27.934522 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-13 00:19:27.934533 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-13 00:19:27.934544 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-3) 2026-01-13 00:19:27.934554 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-13 00:19:27.934565 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-01-13 00:19:27.934575 | orchestrator | 2026-01-13 00:19:27.934586 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-13 00:19:27.934597 | orchestrator | Tuesday 13 January 2026 00:19:27 +0000 (0:00:05.136) 0:00:06.256 ******* 2026-01-13 00:19:27.934608 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:19:27.934618 | orchestrator | 2026-01-13 00:19:27.934629 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-13 00:19:27.934639 | orchestrator | Tuesday 13 January 2026 00:19:27 +0000 (0:00:00.081) 0:00:06.337 ******* 2026-01-13 00:19:27.934650 | orchestrator | changed: [testbed-manager] 2026-01-13 00:19:27.934666 | orchestrator | 2026-01-13 00:19:27.934685 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:19:27.934700 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:19:27.934713 | orchestrator | 2026-01-13 00:19:27.934726 | orchestrator | 2026-01-13 00:19:27.934738 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:19:27.934751 | orchestrator | Tuesday 13 January 2026 00:19:27 +0000 (0:00:00.575) 0:00:06.912 ******* 2026-01-13 00:19:27.934764 | orchestrator | =============================================================================== 2026-01-13 00:19:27.934776 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.14s 2026-01-13 00:19:27.934788 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2026-01-13 00:19:27.934801 | orchestrator | 
osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2026-01-13 00:19:27.934813 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.40s 2026-01-13 00:19:27.934825 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-01-13 00:19:28.220324 | orchestrator | + osism apply known-hosts 2026-01-13 00:19:40.222958 | orchestrator | 2026-01-13 00:19:40 | INFO  | Task 30ba9bc7-417f-4a1b-9d72-ca014fae44b4 (known-hosts) was prepared for execution. 2026-01-13 00:19:40.223074 | orchestrator | 2026-01-13 00:19:40 | INFO  | It takes a moment until task 30ba9bc7-417f-4a1b-9d72-ca014fae44b4 (known-hosts) has been started and output is visible here. 2026-01-13 00:19:56.515312 | orchestrator | 2026-01-13 00:19:56.515418 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-13 00:19:56.515435 | orchestrator | 2026-01-13 00:19:56.515448 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-13 00:19:56.515459 | orchestrator | Tuesday 13 January 2026 00:19:44 +0000 (0:00:00.122) 0:00:00.122 ******* 2026-01-13 00:19:56.515471 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-13 00:19:56.515483 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-13 00:19:56.515494 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-13 00:19:56.515505 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-13 00:19:56.515515 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-13 00:19:56.515551 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-13 00:19:56.515563 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-13 00:19:56.515573 | orchestrator | 2026-01-13 00:19:56.515584 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts 
entries for all hosts with hostname] *** 2026-01-13 00:19:56.515596 | orchestrator | Tuesday 13 January 2026 00:19:50 +0000 (0:00:05.749) 0:00:05.872 ******* 2026-01-13 00:19:56.515608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-13 00:19:56.515633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-13 00:19:56.515645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-13 00:19:56.515656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-13 00:19:56.515667 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-13 00:19:56.515677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-13 00:19:56.515688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-13 00:19:56.515699 | orchestrator | 2026-01-13 00:19:56.515710 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:19:56.515720 | orchestrator | Tuesday 13 January 2026 00:19:50 +0000 (0:00:00.161) 
0:00:06.033 ******* 2026-01-13 00:19:56.515732 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEwdqUB7HN4VL2b39E+wmGAVzyJqO2pg6Y5v1/wgoBcJnDKVoLinltTeLID6TqK0rNRIrSeCdJ7ghn9+OntawF8=) 2026-01-13 00:19:56.515747 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhG4+2lbFna6pQJ31DRXMxGyaPNBhYP+SgYYY5Vj+SwWIJllHlv7shVTlowhDNWJH4AM5sgwNoKcSveeq58h3JoZrMC54iWMBHf7+7MpWa/kdQq9he91PMOGXYjjoaF4+l0qRkMDWTLgNz8E1qQEcH2xX89F+GlZF/3Kb8EaAnLe2ckhBlt+dydNlJPgNWT5hjgHgHCeVYe2KKhlXogk4CFVQr5YwnRO0O0vOONQ2Ryefm146Wyld/tV/22YAA7LItrYkJNGl14yYQy38A5NQUnMgHzCMMFzrPhz/IVuy8TUKyds1DrVjv1ZQjCLM1V2Xn5ENanCKlDLy1Sg774GRHwyOvoER4GADruHMhUn7DFJQo3a6vr7rUFq8gtvV6L54EFVDiuO0XqztSa4uSirBPZv0G/jM9fLDt850YS+jN2nFQATOYQvt5ADRFqUOs1MUfP02mCECk4SOakrIPRINL/gIUrFAbah16IovwWreAg4EuzwOSi0mG5l1P4wrRztk=) 2026-01-13 00:19:56.515762 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG1wld3KLY273VgRhq3DY4Y5VreDQGtCBE/2+XB2y63y) 2026-01-13 00:19:56.515774 | orchestrator | 2026-01-13 00:19:56.515785 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:19:56.515798 | orchestrator | Tuesday 13 January 2026 00:19:51 +0000 (0:00:01.143) 0:00:07.177 ******* 2026-01-13 00:19:56.515810 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII3cBlxJ+Nj+vcX33DDsiEGmFzhWhUplm9v08bHaGxHd) 2026-01-13 00:19:56.515861 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDMNy4Ll3tlu7tXNUF3crkWBEoB1g1PQzruThglC08FOVDD6ejeWmgWy/vre8E+ErGv22sgxBVVepNWHM6ektcqb92n6EoG348bB9Zc6/dZZEEDIMvkuDQujy4Q+enMJKou3EIC8IkKUz7Qwp5nIKTTm0mOLXkmrXMYyA5TfdPaYwfncOIOmbydM05j6gD4KbIxQabeevh29UxN2KeEQTBh0Aaiw/CJTPW/sObRcdr6PiyCWPiUVAZeFSzS2ESfTVBwO0CLhtc5+lG3Z8jS0Asnu+hH6BLqzoU2I07edidZM3QiAtugKAbmdzf7f1+a88PgwxBGqT4AeBrwN4BoNVQBWYcP3fwnASx9koirDoWHqT5As8SUc2Zi/AcNUsrVpgwhxpafm4SkF+gvu3v1gFxhJ3+2bFbdrKLlDwxMM0vOvqHYAF32q9PM5vY9ZsqaDY+R2kYw65yed75mlfT+z11I7mTLiJegVMIayC9Ms6ULIyqKkpuE35qJx+zeUHS0D/M=) 2026-01-13 00:19:56.515895 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCANVsfM+rupUfrqcS59qXGwJqieAokpdxEwmlCCPh+A0evjKfvhj+QDlIGxgTNnid4YWZD3owkOldDVz1czbl4=) 2026-01-13 00:19:56.515914 | orchestrator | 2026-01-13 00:19:56.515933 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:19:56.515952 | orchestrator | Tuesday 13 January 2026 00:19:52 +0000 (0:00:01.111) 0:00:08.288 ******* 2026-01-13 00:19:56.515973 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs4yVojS8g9mYfYUGVbXOe9f9Q4SCvWhW3wolwzYPHIv9Xbbq3IpBpMmRarHfb4EbZ7MLtSbcxXJIWOJ5uT/v1eyT5T+8N2wVHQcZDGLxEAXU6P/wxNX1zzO9k+aHpUchtOIy01JN2dXZs5Sy5k1ZSuceJa3W3bMEdGGZ2875Lx5KDP/ivLciez4rUjtCdI0zGw/yIFfZDmiCMNBf1J4CJkuRRtEPnGhvch3AwM/2ma2fJOCFqDynv4wsIAH+5wXXeGKmtJhApo2jdhDpXlx8dDw8iE638+4gHpBb7t6EnvtfAUbp1BPdruiocm85F7TrtzepT0F8F7bCHuCCsDhZ+7Lq5VyD1GBXQTy9T4txIbj1cfYd7pP1ztsCNGtsfs2Te2qzihxL09FSWaKXe3oH0kyyjs6oAGJEPUYIa14l0Nhwf/F/Y39NcTJcpKQO5cVItDmypoSsRW/FmwWSfj4qLgjvG6laR+WwQtQ2BHQKU5OEHMaUO4nPwpOaCldDAcHk=) 2026-01-13 00:19:56.515993 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIIb9uQuSUhF1yMMy/r7j8CSdaGWoRrCFZZtsbBuHk1lnHQp2c8Q2i0dbJwrz2NEaAHO2XIpenm+NecOa6v3p2U=) 
2026-01-13 00:19:56.516014 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJDk/kf1I0kvvlU6paCuuF/kJCN7s38Ieq6GukgVT+0z) 2026-01-13 00:19:56.516035 | orchestrator | 2026-01-13 00:19:56.516055 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:19:56.516170 | orchestrator | Tuesday 13 January 2026 00:19:53 +0000 (0:00:01.034) 0:00:09.323 ******* 2026-01-13 00:19:56.516247 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBBT3kQanK+MPgr+KonMfs6XdQsNGu/kU8ttz39IgxJTK4uiJwzFrECUpR4sjnyGb7etmLer6xO0aafH+oD8Jc4=) 2026-01-13 00:19:56.516271 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSzAl+AST4boAzW56Ko76Dtd1M0bcRU4tyJFCjJH0MXVG7lRT6+RfPSA0kGrSjafm6IWd4oVyiHfYmxLxlw9qbyURve7RwylY9CnXoklCsk/mlNJ+OA6ev3zoe/1+oRcgf8zLlUgytux5eyVs0gQL0SqQ10VPq0h0zIJplqPKKhlbQQAxps722qV7ak0MWY6iet9q+AQvrnQUK9dUyhHjP12RIQaUUNtn7JJQT/wUNAk+1nNKSTrb9mnQd5Rf9ak/fxAzBHjTctwf8f5p5GURoEAwRbUikWILvHlP8adxBdS0LfmqhS0W8WyGSb70yReIKzWpIxbjf9nkEwaAYrAXsvwKSGZXz6wGtri/OvXaHpXd86ysh7zku38aMFCpkZNUPaok6j8AQ7Wwk/HgE6BM6DlVjjttr7tnmuiia7nH7El2/x06QeaTHBHD9LAyvSbfaLGr9RldoEHv3GrSeVgd6zmlsWAuJH+dMbbbii4vfL+snp5Uv+ZJYpM34r2S7Fp8=) 2026-01-13 00:19:56.516290 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKKhUM0juZlhzLj/K6dKqkuv5KvYz7H8RaW5TIFwpjIo) 2026-01-13 00:19:56.516307 | orchestrator | 2026-01-13 00:19:56.516325 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:19:56.516344 | orchestrator | Tuesday 13 January 2026 00:19:54 +0000 (0:00:01.023) 0:00:10.346 ******* 2026-01-13 00:19:56.516361 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDNRkg+yOke3Ddbei3PVLXSl/gJWdLCOPL0+k2kVStf6W6HnoSVqAA5dQQ3nXjW8qov3yO+0olH+j2KXOlaK3tY=) 2026-01-13 00:19:56.516379 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiIV9KgZPNLypT6BgMveQMLb+XP2pvowAeWbYSVy5SRDT4C/PTEvxgtPE0/a2Uq1ynUXQr9qfIMI1ISnuQ2P189RXwLdzXsuECbWwQRc+iTH+7NZOnsONukNVYPuQw772ZM+l/r6CdNv0Z0dYcm9DO7MQGAWSrEZoMCeTfvACeVD+SxZNioE13o0N+riGgnwJppHHMjh5ILfJ7eu9yfKGgqOij82UBLwmWSliga7vkTObjQ5UyfGeju1sf7wsx1akOGSgbFTFRUvaQPUAJJ5G862lRN5n+iXZvlytzFDs8UFgVnRDPSRD+96KQ9c2An7hpBMe1EJHbaaHWB4btrOcRrZY2dzbN3rrVKCTuf2v/Sju6B/hexQ582byuG6WUf7Q5mw2HV0yRct3CpB8rCwKy1IKdtrjncIQ47G5XAfpDWGufIG4h14ZgdO4Gb9zp8xRi11VauLn1YL2YuTVWqgbrjeCXA7gmsrLGpa2dOV1n/ByXe53b4DQr1gq0XufLzak=) 2026-01-13 00:19:56.516410 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILYwppQSAKDyEbxkq0PEPVz0XhqRpbAsz1hlj5fhJOjq) 2026-01-13 00:19:56.516427 | orchestrator | 2026-01-13 00:19:56.516443 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:19:56.516462 | orchestrator | Tuesday 13 January 2026 00:19:55 +0000 (0:00:00.974) 0:00:11.320 ******* 2026-01-13 00:19:56.516500 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBiN8QzGQv8ZHPfLAudwKKlhtdteTM1C6Cw2AUpga87mGiT8xUv81/alwHnXJq3Ey3e9/R5sm9J1abxgzuI1J8M7owK/mN4T2LlXSJHlagJNjQvfKlmxZz1l+kppPMKfXcaLTwzp9gQUsMjTj4UGuO/PM4ldgmbE/NhWO6afrVT/tJnFN+UM2UgJ5BCR2N7rtPBPmI00CQGWr/WPcjDIT9aFNrZR9wr4xJwhSDePHSUAN8QjR3AEfr9ihcSBM05ESY9JhmHgFHiWyYsoCclIMclmBl+c7Bb2Sb0PlBoEH6kaM+vWS68L4i7A1EHJ/muJco6MYUofRimAYxW2+fS+Ehb6KA9Cse2F5u/b5rRcD/ZOoRgnvbl/e4M7EYCUrzOE2exqKl1Bl6hyJz4gmBejh7/Hoh0vIKqRryeuojSyAr76CegIJeSVMXHuF2l3V6qZ28UaPelyYqGbyQZdSG4S5uJOt+Cwp0neF9Egk2Sr/wvzmo0CMhzcekdV1l0uwp8CM=) 2026-01-13 00:20:08.080265 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOpm2DX+iuhXBHOqADVnwtARbnSmUmbxHxRfUznOX5jn2moh3ojj7q8FUsWlwITOW4zsjw7yBh1DDEJH7LBlkQ4=) 2026-01-13 00:20:08.080373 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBRKL3VzcKtOBW81QKJd742xWys9tDvetvUB2cavjZrf) 2026-01-13 00:20:08.080398 | orchestrator | 2026-01-13 00:20:08.080419 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:20:08.080439 | orchestrator | Tuesday 13 January 2026 00:19:56 +0000 (0:00:00.973) 0:00:12.293 ******* 2026-01-13 00:20:08.080462 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2PZmaIQFk5fXdGfNQ+ybya9B5NctgaD8n6aoYA9hO92qMhg4UMz3L7ulQZarThvALWLN043WlfxapsN/zRnL6sK/yIXN+Em7Opi+v5NQCebcQB7xR0NB7dKLwW2ocTgjkNn8SkZ5scnUUh56VdCz+tpXorPuAGfbGPBuHjF35oiCOezymbJ4aeWj+wP/JeyE5XLWSS79QoosNLYfj2Yunul8lbPJPdSjXMXOp/DLuj/u/RM5jN0O5JU1digqgRnnQGu6TGu4le4RaDc9XozNlkrt6sN8fLwMC3oeIex0CWE1HoLS41VA6PHUlSabJ1xksQ0VpmontWHSIBuFvNAKG8gaFxIVlolm8T91B8wKx4F9hcZLgGvkhoAzancJ5itgN7xFopLQLa+D+GaFlta7DXGTTz4LllEb52vWB6U/4MIsaOJ3HDdss7MTR3vlK4mJgA0IZ8OPeitHhpeeycZPT8lumHDpN1f1W8d91App9YY4k4ZED5u052Tmh0fEV+RM=) 2026-01-13 00:20:08.080482 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPc5npEgejWX+Bx8E+UvNX2sIZ3DqVmvOFlDMHrWgyf787GSnJk71oQxAe6VCEQGIy0fCbdGZCnpoA9V/nv1UtQ=) 2026-01-13 00:20:08.080501 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGOl+o72g5WDnUdyk5PWe2VS1qTxhFD/4XNOHzgBM+4b) 2026-01-13 00:20:08.080518 | orchestrator | 2026-01-13 00:20:08.080537 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-13 00:20:08.080558 | orchestrator | Tuesday 13 January 2026 00:19:58 +0000 (0:00:02.030) 0:00:14.324 ******* 
2026-01-13 00:20:08.080576 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-13 00:20:08.080595 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-13 00:20:08.080613 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-13 00:20:08.080632 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-13 00:20:08.080651 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-13 00:20:08.080669 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-13 00:20:08.080689 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-13 00:20:08.080706 | orchestrator | 2026-01-13 00:20:08.080743 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-13 00:20:08.080788 | orchestrator | Tuesday 13 January 2026 00:20:03 +0000 (0:00:05.221) 0:00:19.546 ******* 2026-01-13 00:20:08.080810 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-13 00:20:08.080830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-13 00:20:08.080848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-13 00:20:08.080868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-13 00:20:08.080889 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-13 00:20:08.080908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-13 00:20:08.080929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-13 00:20:08.080948 | orchestrator | 2026-01-13 00:20:08.080964 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:20:08.080976 | orchestrator | Tuesday 13 January 2026 00:20:03 +0000 (0:00:00.166) 0:00:19.713 ******* 2026-01-13 00:20:08.080987 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEwdqUB7HN4VL2b39E+wmGAVzyJqO2pg6Y5v1/wgoBcJnDKVoLinltTeLID6TqK0rNRIrSeCdJ7ghn9+OntawF8=) 2026-01-13 00:20:08.081021 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhG4+2lbFna6pQJ31DRXMxGyaPNBhYP+SgYYY5Vj+SwWIJllHlv7shVTlowhDNWJH4AM5sgwNoKcSveeq58h3JoZrMC54iWMBHf7+7MpWa/kdQq9he91PMOGXYjjoaF4+l0qRkMDWTLgNz8E1qQEcH2xX89F+GlZF/3Kb8EaAnLe2ckhBlt+dydNlJPgNWT5hjgHgHCeVYe2KKhlXogk4CFVQr5YwnRO0O0vOONQ2Ryefm146Wyld/tV/22YAA7LItrYkJNGl14yYQy38A5NQUnMgHzCMMFzrPhz/IVuy8TUKyds1DrVjv1ZQjCLM1V2Xn5ENanCKlDLy1Sg774GRHwyOvoER4GADruHMhUn7DFJQo3a6vr7rUFq8gtvV6L54EFVDiuO0XqztSa4uSirBPZv0G/jM9fLDt850YS+jN2nFQATOYQvt5ADRFqUOs1MUfP02mCECk4SOakrIPRINL/gIUrFAbah16IovwWreAg4EuzwOSi0mG5l1P4wrRztk=) 2026-01-13 00:20:08.081035 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG1wld3KLY273VgRhq3DY4Y5VreDQGtCBE/2+XB2y63y) 2026-01-13 
00:20:08.081046 | orchestrator | 2026-01-13 00:20:08.081058 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:20:08.081070 | orchestrator | Tuesday 13 January 2026 00:20:04 +0000 (0:00:01.018) 0:00:20.731 ******* 2026-01-13 00:20:08.081081 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII3cBlxJ+Nj+vcX33DDsiEGmFzhWhUplm9v08bHaGxHd) 2026-01-13 00:20:08.081094 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMNy4Ll3tlu7tXNUF3crkWBEoB1g1PQzruThglC08FOVDD6ejeWmgWy/vre8E+ErGv22sgxBVVepNWHM6ektcqb92n6EoG348bB9Zc6/dZZEEDIMvkuDQujy4Q+enMJKou3EIC8IkKUz7Qwp5nIKTTm0mOLXkmrXMYyA5TfdPaYwfncOIOmbydM05j6gD4KbIxQabeevh29UxN2KeEQTBh0Aaiw/CJTPW/sObRcdr6PiyCWPiUVAZeFSzS2ESfTVBwO0CLhtc5+lG3Z8jS0Asnu+hH6BLqzoU2I07edidZM3QiAtugKAbmdzf7f1+a88PgwxBGqT4AeBrwN4BoNVQBWYcP3fwnASx9koirDoWHqT5As8SUc2Zi/AcNUsrVpgwhxpafm4SkF+gvu3v1gFxhJ3+2bFbdrKLlDwxMM0vOvqHYAF32q9PM5vY9ZsqaDY+R2kYw65yed75mlfT+z11I7mTLiJegVMIayC9Ms6ULIyqKkpuE35qJx+zeUHS0D/M=) 2026-01-13 00:20:08.081107 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCANVsfM+rupUfrqcS59qXGwJqieAokpdxEwmlCCPh+A0evjKfvhj+QDlIGxgTNnid4YWZD3owkOldDVz1czbl4=) 2026-01-13 00:20:08.081126 | orchestrator | 2026-01-13 00:20:08.081136 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:20:08.081146 | orchestrator | Tuesday 13 January 2026 00:20:05 +0000 (0:00:01.026) 0:00:21.758 ******* 2026-01-13 00:20:08.081156 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCs4yVojS8g9mYfYUGVbXOe9f9Q4SCvWhW3wolwzYPHIv9Xbbq3IpBpMmRarHfb4EbZ7MLtSbcxXJIWOJ5uT/v1eyT5T+8N2wVHQcZDGLxEAXU6P/wxNX1zzO9k+aHpUchtOIy01JN2dXZs5Sy5k1ZSuceJa3W3bMEdGGZ2875Lx5KDP/ivLciez4rUjtCdI0zGw/yIFfZDmiCMNBf1J4CJkuRRtEPnGhvch3AwM/2ma2fJOCFqDynv4wsIAH+5wXXeGKmtJhApo2jdhDpXlx8dDw8iE638+4gHpBb7t6EnvtfAUbp1BPdruiocm85F7TrtzepT0F8F7bCHuCCsDhZ+7Lq5VyD1GBXQTy9T4txIbj1cfYd7pP1ztsCNGtsfs2Te2qzihxL09FSWaKXe3oH0kyyjs6oAGJEPUYIa14l0Nhwf/F/Y39NcTJcpKQO5cVItDmypoSsRW/FmwWSfj4qLgjvG6laR+WwQtQ2BHQKU5OEHMaUO4nPwpOaCldDAcHk=) 2026-01-13 00:20:08.081166 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIIb9uQuSUhF1yMMy/r7j8CSdaGWoRrCFZZtsbBuHk1lnHQp2c8Q2i0dbJwrz2NEaAHO2XIpenm+NecOa6v3p2U=) 2026-01-13 00:20:08.081176 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJDk/kf1I0kvvlU6paCuuF/kJCN7s38Ieq6GukgVT+0z) 2026-01-13 00:20:08.081186 | orchestrator | 2026-01-13 00:20:08.081196 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:20:08.081265 | orchestrator | Tuesday 13 January 2026 00:20:07 +0000 (0:00:01.050) 0:00:22.808 ******* 2026-01-13 00:20:08.081278 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBBT3kQanK+MPgr+KonMfs6XdQsNGu/kU8ttz39IgxJTK4uiJwzFrECUpR4sjnyGb7etmLer6xO0aafH+oD8Jc4=) 2026-01-13 00:20:08.081297 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDSzAl+AST4boAzW56Ko76Dtd1M0bcRU4tyJFCjJH0MXVG7lRT6+RfPSA0kGrSjafm6IWd4oVyiHfYmxLxlw9qbyURve7RwylY9CnXoklCsk/mlNJ+OA6ev3zoe/1+oRcgf8zLlUgytux5eyVs0gQL0SqQ10VPq0h0zIJplqPKKhlbQQAxps722qV7ak0MWY6iet9q+AQvrnQUK9dUyhHjP12RIQaUUNtn7JJQT/wUNAk+1nNKSTrb9mnQd5Rf9ak/fxAzBHjTctwf8f5p5GURoEAwRbUikWILvHlP8adxBdS0LfmqhS0W8WyGSb70yReIKzWpIxbjf9nkEwaAYrAXsvwKSGZXz6wGtri/OvXaHpXd86ysh7zku38aMFCpkZNUPaok6j8AQ7Wwk/HgE6BM6DlVjjttr7tnmuiia7nH7El2/x06QeaTHBHD9LAyvSbfaLGr9RldoEHv3GrSeVgd6zmlsWAuJH+dMbbbii4vfL+snp5Uv+ZJYpM34r2S7Fp8=) 2026-01-13 00:20:08.081323 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKKhUM0juZlhzLj/K6dKqkuv5KvYz7H8RaW5TIFwpjIo) 2026-01-13 00:20:12.436453 | orchestrator | 2026-01-13 00:20:12.436540 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:20:12.436555 | orchestrator | Tuesday 13 January 2026 00:20:08 +0000 (0:00:01.046) 0:00:23.855 ******* 2026-01-13 00:20:12.436567 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDNRkg+yOke3Ddbei3PVLXSl/gJWdLCOPL0+k2kVStf6W6HnoSVqAA5dQQ3nXjW8qov3yO+0olH+j2KXOlaK3tY=) 2026-01-13 00:20:12.436582 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiIV9KgZPNLypT6BgMveQMLb+XP2pvowAeWbYSVy5SRDT4C/PTEvxgtPE0/a2Uq1ynUXQr9qfIMI1ISnuQ2P189RXwLdzXsuECbWwQRc+iTH+7NZOnsONukNVYPuQw772ZM+l/r6CdNv0Z0dYcm9DO7MQGAWSrEZoMCeTfvACeVD+SxZNioE13o0N+riGgnwJppHHMjh5ILfJ7eu9yfKGgqOij82UBLwmWSliga7vkTObjQ5UyfGeju1sf7wsx1akOGSgbFTFRUvaQPUAJJ5G862lRN5n+iXZvlytzFDs8UFgVnRDPSRD+96KQ9c2An7hpBMe1EJHbaaHWB4btrOcRrZY2dzbN3rrVKCTuf2v/Sju6B/hexQ582byuG6WUf7Q5mw2HV0yRct3CpB8rCwKy1IKdtrjncIQ47G5XAfpDWGufIG4h14ZgdO4Gb9zp8xRi11VauLn1YL2YuTVWqgbrjeCXA7gmsrLGpa2dOV1n/ByXe53b4DQr1gq0XufLzak=) 2026-01-13 00:20:12.436596 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILYwppQSAKDyEbxkq0PEPVz0XhqRpbAsz1hlj5fhJOjq) 2026-01-13 00:20:12.436630 | orchestrator | 2026-01-13 00:20:12.436642 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:20:12.436653 | orchestrator | Tuesday 13 January 2026 00:20:09 +0000 (0:00:01.059) 0:00:24.915 ******* 2026-01-13 00:20:12.436677 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBRKL3VzcKtOBW81QKJd742xWys9tDvetvUB2cavjZrf) 2026-01-13 00:20:12.436689 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBiN8QzGQv8ZHPfLAudwKKlhtdteTM1C6Cw2AUpga87mGiT8xUv81/alwHnXJq3Ey3e9/R5sm9J1abxgzuI1J8M7owK/mN4T2LlXSJHlagJNjQvfKlmxZz1l+kppPMKfXcaLTwzp9gQUsMjTj4UGuO/PM4ldgmbE/NhWO6afrVT/tJnFN+UM2UgJ5BCR2N7rtPBPmI00CQGWr/WPcjDIT9aFNrZR9wr4xJwhSDePHSUAN8QjR3AEfr9ihcSBM05ESY9JhmHgFHiWyYsoCclIMclmBl+c7Bb2Sb0PlBoEH6kaM+vWS68L4i7A1EHJ/muJco6MYUofRimAYxW2+fS+Ehb6KA9Cse2F5u/b5rRcD/ZOoRgnvbl/e4M7EYCUrzOE2exqKl1Bl6hyJz4gmBejh7/Hoh0vIKqRryeuojSyAr76CegIJeSVMXHuF2l3V6qZ28UaPelyYqGbyQZdSG4S5uJOt+Cwp0neF9Egk2Sr/wvzmo0CMhzcekdV1l0uwp8CM=) 2026-01-13 00:20:12.436701 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOpm2DX+iuhXBHOqADVnwtARbnSmUmbxHxRfUznOX5jn2moh3ojj7q8FUsWlwITOW4zsjw7yBh1DDEJH7LBlkQ4=) 2026-01-13 00:20:12.436712 | orchestrator | 2026-01-13 00:20:12.436723 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-13 00:20:12.436734 | orchestrator | Tuesday 13 January 2026 00:20:10 +0000 (0:00:01.040) 0:00:25.955 ******* 2026-01-13 00:20:12.436745 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC2PZmaIQFk5fXdGfNQ+ybya9B5NctgaD8n6aoYA9hO92qMhg4UMz3L7ulQZarThvALWLN043WlfxapsN/zRnL6sK/yIXN+Em7Opi+v5NQCebcQB7xR0NB7dKLwW2ocTgjkNn8SkZ5scnUUh56VdCz+tpXorPuAGfbGPBuHjF35oiCOezymbJ4aeWj+wP/JeyE5XLWSS79QoosNLYfj2Yunul8lbPJPdSjXMXOp/DLuj/u/RM5jN0O5JU1digqgRnnQGu6TGu4le4RaDc9XozNlkrt6sN8fLwMC3oeIex0CWE1HoLS41VA6PHUlSabJ1xksQ0VpmontWHSIBuFvNAKG8gaFxIVlolm8T91B8wKx4F9hcZLgGvkhoAzancJ5itgN7xFopLQLa+D+GaFlta7DXGTTz4LllEb52vWB6U/4MIsaOJ3HDdss7MTR3vlK4mJgA0IZ8OPeitHhpeeycZPT8lumHDpN1f1W8d91App9YY4k4ZED5u052Tmh0fEV+RM=) 2026-01-13 00:20:12.436756 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGOl+o72g5WDnUdyk5PWe2VS1qTxhFD/4XNOHzgBM+4b) 2026-01-13 00:20:12.436767 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPc5npEgejWX+Bx8E+UvNX2sIZ3DqVmvOFlDMHrWgyf787GSnJk71oQxAe6VCEQGIy0fCbdGZCnpoA9V/nv1UtQ=) 2026-01-13 00:20:12.436778 | orchestrator | 2026-01-13 00:20:12.436789 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-13 00:20:12.436799 | orchestrator | Tuesday 13 January 2026 00:20:11 +0000 (0:00:01.083) 0:00:27.038 ******* 2026-01-13 00:20:12.436810 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-13 00:20:12.436821 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-13 00:20:12.436832 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-13 00:20:12.436842 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-13 00:20:12.436853 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-13 00:20:12.436863 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-13 00:20:12.436874 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-13 00:20:12.436884 | orchestrator | 
skipping: [testbed-manager] 2026-01-13 00:20:12.436895 | orchestrator | 2026-01-13 00:20:12.436921 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-13 00:20:12.436932 | orchestrator | Tuesday 13 January 2026 00:20:11 +0000 (0:00:00.156) 0:00:27.195 ******* 2026-01-13 00:20:12.436943 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:20:12.436954 | orchestrator | 2026-01-13 00:20:12.436965 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-13 00:20:12.436983 | orchestrator | Tuesday 13 January 2026 00:20:11 +0000 (0:00:00.066) 0:00:27.261 ******* 2026-01-13 00:20:12.436994 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:20:12.437004 | orchestrator | 2026-01-13 00:20:12.437015 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-13 00:20:12.437026 | orchestrator | Tuesday 13 January 2026 00:20:11 +0000 (0:00:00.055) 0:00:27.317 ******* 2026-01-13 00:20:12.437037 | orchestrator | changed: [testbed-manager] 2026-01-13 00:20:12.437047 | orchestrator | 2026-01-13 00:20:12.437058 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:20:12.437070 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-13 00:20:12.437081 | orchestrator | 2026-01-13 00:20:12.437092 | orchestrator | 2026-01-13 00:20:12.437103 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:20:12.437114 | orchestrator | Tuesday 13 January 2026 00:20:12 +0000 (0:00:00.703) 0:00:28.020 ******* 2026-01-13 00:20:12.437124 | orchestrator | =============================================================================== 2026-01-13 00:20:12.437135 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.75s 2026-01-13 
00:20:12.437146 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.22s 2026-01-13 00:20:12.437157 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.03s 2026-01-13 00:20:12.437168 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-13 00:20:12.437178 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-01-13 00:20:12.437189 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-01-13 00:20:12.437200 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-13 00:20:12.437230 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-13 00:20:12.437241 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-13 00:20:12.437251 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-13 00:20:12.437262 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-13 00:20:12.437273 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-13 00:20:12.437284 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-01-13 00:20:12.437294 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-01-13 00:20:12.437312 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-01-13 00:20:12.437323 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-01-13 00:20:12.437334 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.70s 2026-01-13 
00:20:12.437345 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-01-13 00:20:12.437356 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-01-13 00:20:12.437367 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-01-13 00:20:12.719344 | orchestrator | + osism apply squid 2026-01-13 00:20:24.809050 | orchestrator | 2026-01-13 00:20:24 | INFO  | Task e69aa7d1-79c0-4f37-a147-7b4d8d64571b (squid) was prepared for execution. 2026-01-13 00:20:24.809158 | orchestrator | 2026-01-13 00:20:24 | INFO  | It takes a moment until task e69aa7d1-79c0-4f37-a147-7b4d8d64571b (squid) has been started and output is visible here. 2026-01-13 00:22:20.901199 | orchestrator | 2026-01-13 00:22:20.901335 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-13 00:22:20.901353 | orchestrator | 2026-01-13 00:22:20.901390 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-13 00:22:20.901404 | orchestrator | Tuesday 13 January 2026 00:20:28 +0000 (0:00:00.174) 0:00:00.174 ******* 2026-01-13 00:22:20.901416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-13 00:22:20.901428 | orchestrator | 2026-01-13 00:22:20.901439 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-13 00:22:20.901450 | orchestrator | Tuesday 13 January 2026 00:20:28 +0000 (0:00:00.085) 0:00:00.260 ******* 2026-01-13 00:22:20.901461 | orchestrator | ok: [testbed-manager] 2026-01-13 00:22:20.901472 | orchestrator | 2026-01-13 00:22:20.901483 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-13 
00:22:20.901494 | orchestrator | Tuesday 13 January 2026 00:20:30 +0000 (0:00:01.433) 0:00:01.694 ******* 2026-01-13 00:22:20.901505 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-13 00:22:20.901516 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-13 00:22:20.901526 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-13 00:22:20.901537 | orchestrator | 2026-01-13 00:22:20.901548 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-13 00:22:20.901558 | orchestrator | Tuesday 13 January 2026 00:20:31 +0000 (0:00:01.121) 0:00:02.816 ******* 2026-01-13 00:22:20.901569 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-13 00:22:20.901580 | orchestrator | 2026-01-13 00:22:20.901591 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-13 00:22:20.901601 | orchestrator | Tuesday 13 January 2026 00:20:32 +0000 (0:00:01.093) 0:00:03.910 ******* 2026-01-13 00:22:20.901612 | orchestrator | ok: [testbed-manager] 2026-01-13 00:22:20.901622 | orchestrator | 2026-01-13 00:22:20.901633 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-13 00:22:20.901644 | orchestrator | Tuesday 13 January 2026 00:20:32 +0000 (0:00:00.316) 0:00:04.226 ******* 2026-01-13 00:22:20.901654 | orchestrator | changed: [testbed-manager] 2026-01-13 00:22:20.901665 | orchestrator | 2026-01-13 00:22:20.901676 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-13 00:22:20.901686 | orchestrator | Tuesday 13 January 2026 00:20:33 +0000 (0:00:00.901) 0:00:05.128 ******* 2026-01-13 00:22:20.901697 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-13 00:22:20.901708 | orchestrator | ok: [testbed-manager] 2026-01-13 00:22:20.901719 | orchestrator | 2026-01-13 00:22:20.901729 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-13 00:22:20.901740 | orchestrator | Tuesday 13 January 2026 00:21:04 +0000 (0:00:30.388) 0:00:35.516 ******* 2026-01-13 00:22:20.901752 | orchestrator | changed: [testbed-manager] 2026-01-13 00:22:20.901764 | orchestrator | 2026-01-13 00:22:20.901776 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-13 00:22:20.901789 | orchestrator | Tuesday 13 January 2026 00:21:19 +0000 (0:00:15.719) 0:00:51.236 ******* 2026-01-13 00:22:20.901802 | orchestrator | Pausing for 60 seconds 2026-01-13 00:22:20.901815 | orchestrator | changed: [testbed-manager] 2026-01-13 00:22:20.901827 | orchestrator | 2026-01-13 00:22:20.901839 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-13 00:22:20.901851 | orchestrator | Tuesday 13 January 2026 00:22:20 +0000 (0:01:00.081) 0:01:51.317 ******* 2026-01-13 00:22:20.901863 | orchestrator | ok: [testbed-manager] 2026-01-13 00:22:20.901875 | orchestrator | 2026-01-13 00:22:20.901887 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-13 00:22:20.901899 | orchestrator | Tuesday 13 January 2026 00:22:20 +0000 (0:00:00.067) 0:01:51.385 ******* 2026-01-13 00:22:20.901911 | orchestrator | changed: [testbed-manager] 2026-01-13 00:22:20.901923 | orchestrator | 2026-01-13 00:22:20.901935 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:22:20.901955 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:22:20.901968 | orchestrator | 2026-01-13 00:22:20.901980 | orchestrator | 2026-01-13 00:22:20.901993 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-13 00:22:20.902005 | orchestrator | Tuesday 13 January 2026 00:22:20 +0000 (0:00:00.568) 0:01:51.954 ******* 2026-01-13 00:22:20.902073 | orchestrator | =============================================================================== 2026-01-13 00:22:20.902086 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-01-13 00:22:20.902098 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.39s 2026-01-13 00:22:20.902110 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.72s 2026-01-13 00:22:20.902120 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.43s 2026-01-13 00:22:20.902131 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.12s 2026-01-13 00:22:20.902142 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s 2026-01-13 00:22:20.902152 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.90s 2026-01-13 00:22:20.902163 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.57s 2026-01-13 00:22:20.902173 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.32s 2026-01-13 00:22:20.902184 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-01-13 00:22:20.902195 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-01-13 00:22:21.192398 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-13 00:22:21.192491 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-13 00:22:21.197639 | orchestrator | + set -e 2026-01-13 00:22:21.197678 | orchestrator | + NAMESPACE=kolla 2026-01-13 
00:22:21.197692 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-13 00:22:21.204579 | orchestrator | ++ semver latest 9.0.0 2026-01-13 00:22:21.259829 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-13 00:22:21.259966 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-13 00:22:21.259993 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-13 00:22:33.286675 | orchestrator | 2026-01-13 00:22:33 | INFO  | Task 06e949a0-1f23-46bf-b910-fb634889464f (operator) was prepared for execution. 2026-01-13 00:22:33.286772 | orchestrator | 2026-01-13 00:22:33 | INFO  | It takes a moment until task 06e949a0-1f23-46bf-b910-fb634889464f (operator) has been started and output is visible here. 2026-01-13 00:22:49.749515 | orchestrator | 2026-01-13 00:22:49.749621 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-13 00:22:49.749636 | orchestrator | 2026-01-13 00:22:49.749647 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-13 00:22:49.749658 | orchestrator | Tuesday 13 January 2026 00:22:37 +0000 (0:00:00.135) 0:00:00.135 ******* 2026-01-13 00:22:49.749668 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:22:49.749679 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:22:49.749688 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:22:49.749699 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:22:49.749708 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:22:49.749717 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:22:49.749727 | orchestrator | 2026-01-13 00:22:49.749737 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-13 00:22:49.749750 | orchestrator | Tuesday 13 January 2026 00:22:41 +0000 (0:00:04.319) 0:00:04.455 ******* 2026-01-13 00:22:49.749760 | orchestrator | ok: [testbed-node-0] 
2026-01-13 00:22:49.749770 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:22:49.749779 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:22:49.749788 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:22:49.749798 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:22:49.749828 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:22:49.749838 | orchestrator | 2026-01-13 00:22:49.749848 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-01-13 00:22:49.749857 | orchestrator | 2026-01-13 00:22:49.749867 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-13 00:22:49.749876 | orchestrator | Tuesday 13 January 2026 00:22:42 +0000 (0:00:00.766) 0:00:05.221 ******* 2026-01-13 00:22:49.749886 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:22:49.749895 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:22:49.749905 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:22:49.749914 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:22:49.749923 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:22:49.749932 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:22:49.749942 | orchestrator | 2026-01-13 00:22:49.749951 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-13 00:22:49.749961 | orchestrator | Tuesday 13 January 2026 00:22:42 +0000 (0:00:00.184) 0:00:05.405 ******* 2026-01-13 00:22:49.749970 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:22:49.749979 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:22:49.749989 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:22:49.749998 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:22:49.750007 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:22:49.750121 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:22:49.750142 | orchestrator | 2026-01-13 00:22:49.750155 | orchestrator | TASK [osism.commons.operator : Create operator group] 
************************** 2026-01-13 00:22:49.750168 | orchestrator | Tuesday 13 January 2026 00:22:42 +0000 (0:00:00.170) 0:00:05.576 ******* 2026-01-13 00:22:49.750199 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:22:49.750217 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:22:49.750257 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:22:49.750273 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:22:49.750286 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:22:49.750298 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:22:49.750311 | orchestrator | 2026-01-13 00:22:49.750323 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-13 00:22:49.750336 | orchestrator | Tuesday 13 January 2026 00:22:43 +0000 (0:00:00.631) 0:00:06.208 ******* 2026-01-13 00:22:49.750348 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:22:49.750361 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:22:49.750373 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:22:49.750385 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:22:49.750397 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:22:49.750410 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:22:49.750422 | orchestrator | 2026-01-13 00:22:49.750433 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-13 00:22:49.750443 | orchestrator | Tuesday 13 January 2026 00:22:44 +0000 (0:00:00.749) 0:00:06.957 ******* 2026-01-13 00:22:49.750454 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-01-13 00:22:49.750466 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-01-13 00:22:49.750476 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-01-13 00:22:49.750487 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-01-13 00:22:49.750498 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-01-13 
00:22:49.750508 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-13 00:22:49.750519 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-13 00:22:49.750529 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-13 00:22:49.750540 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-13 00:22:49.750551 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-13 00:22:49.750561 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-13 00:22:49.750572 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-13 00:22:49.750583 | orchestrator |
2026-01-13 00:22:49.750593 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-13 00:22:49.750614 | orchestrator | Tuesday 13 January 2026 00:22:45 +0000 (0:00:01.220) 0:00:08.177 *******
2026-01-13 00:22:49.750625 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:22:49.750636 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:22:49.750646 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:22:49.750657 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:22:49.750668 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:22:49.750678 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:22:49.750689 | orchestrator |
2026-01-13 00:22:49.750700 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-13 00:22:49.750712 | orchestrator | Tuesday 13 January 2026 00:22:46 +0000 (0:00:01.165) 0:00:09.343 *******
2026-01-13 00:22:49.750722 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-13 00:22:49.750733 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-13 00:22:49.750744 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-13 00:22:49.750755 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-13 00:22:49.750786 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-13 00:22:49.750798 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-13 00:22:49.750808 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-13 00:22:49.750819 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-13 00:22:49.750830 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-13 00:22:49.750840 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-13 00:22:49.750851 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-13 00:22:49.750862 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-13 00:22:49.750872 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-13 00:22:49.750883 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-13 00:22:49.750894 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-13 00:22:49.750904 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-13 00:22:49.750915 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-13 00:22:49.750925 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-13 00:22:49.750936 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-13 00:22:49.750947 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-13 00:22:49.750957 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-13 00:22:49.750968 | orchestrator |
2026-01-13 00:22:49.750978 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-13 00:22:49.750990 | orchestrator | Tuesday 13 January 2026 00:22:47 +0000 (0:00:01.204) 0:00:10.547 *******
2026-01-13 00:22:49.751001 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:22:49.751012 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:22:49.751022 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:22:49.751033 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:22:49.751043 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:22:49.751054 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:22:49.751065 | orchestrator |
2026-01-13 00:22:49.751075 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-13 00:22:49.751086 | orchestrator | Tuesday 13 January 2026 00:22:47 +0000 (0:00:00.152) 0:00:10.699 *******
2026-01-13 00:22:49.751097 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:22:49.751108 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:22:49.751118 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:22:49.751129 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:22:49.751139 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:22:49.751156 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:22:49.751167 | orchestrator |
2026-01-13 00:22:49.751178 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-13 00:22:49.751189 | orchestrator | Tuesday 13 January 2026 00:22:48 +0000 (0:00:00.169) 0:00:10.869 *******
2026-01-13 00:22:49.751199 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:22:49.751210 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:22:49.751220 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:22:49.751256 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:22:49.751274 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:22:49.751286 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:22:49.751296 | orchestrator |
2026-01-13 00:22:49.751307 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-13 00:22:49.751318 | orchestrator | Tuesday 13 January 2026 00:22:48 +0000 (0:00:00.592) 0:00:11.461 *******
2026-01-13 00:22:49.751328 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:22:49.751339 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:22:49.751350 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:22:49.751360 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:22:49.751371 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:22:49.751381 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:22:49.751392 | orchestrator |
2026-01-13 00:22:49.751402 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-13 00:22:49.751413 | orchestrator | Tuesday 13 January 2026 00:22:48 +0000 (0:00:00.151) 0:00:11.613 *******
2026-01-13 00:22:49.751424 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-13 00:22:49.751435 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:22:49.751446 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-13 00:22:49.751457 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-13 00:22:49.751468 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:22:49.751478 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:22:49.751489 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-13 00:22:49.751500 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-13 00:22:49.751510 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:22:49.751521 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-13 00:22:49.751532 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:22:49.751542 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:22:49.751553 | orchestrator |
2026-01-13 00:22:49.751564 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-13 00:22:49.751575 | orchestrator | Tuesday 13 January 2026 00:22:49 +0000 (0:00:00.686) 0:00:12.300 *******
2026-01-13 00:22:49.751585 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:22:49.751596 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:22:49.751607 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:22:49.751617 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:22:49.751628 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:22:49.751638 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:22:49.751649 | orchestrator |
2026-01-13 00:22:49.751660 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-13 00:22:49.751670 | orchestrator | Tuesday 13 January 2026 00:22:49 +0000 (0:00:00.152) 0:00:12.452 *******
2026-01-13 00:22:49.751681 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:22:49.751692 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:22:49.751702 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:22:49.751713 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:22:49.751731 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:22:51.129197 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:22:51.129412 | orchestrator |
2026-01-13 00:22:51.129435 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-13 00:22:51.129449 | orchestrator | Tuesday 13 January 2026 00:22:49 +0000 (0:00:00.150) 0:00:12.603 *******
2026-01-13 00:22:51.129489 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:22:51.129501 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:22:51.129512 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:22:51.129522 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:22:51.129533 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:22:51.129544 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:22:51.129554 | orchestrator |
2026-01-13 00:22:51.129565 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-13 00:22:51.129577 | orchestrator | Tuesday 13 January 2026 00:22:49 +0000 (0:00:00.152) 0:00:12.756 *******
2026-01-13 00:22:51.129587 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:22:51.129598 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:22:51.129609 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:22:51.129619 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:22:51.129630 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:22:51.129640 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:22:51.129650 | orchestrator |
2026-01-13 00:22:51.129661 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-13 00:22:51.129672 | orchestrator | Tuesday 13 January 2026 00:22:50 +0000 (0:00:00.663) 0:00:13.419 *******
2026-01-13 00:22:51.129683 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:22:51.129693 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:22:51.129704 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:22:51.129716 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:22:51.129730 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:22:51.129742 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:22:51.129753 | orchestrator |
2026-01-13 00:22:51.129766 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 00:22:51.129780 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-13 00:22:51.129794 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-13 00:22:51.129830 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-13 00:22:51.129852 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-13 00:22:51.129870 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-13 00:22:51.129888 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-13 00:22:51.129905 | orchestrator |
2026-01-13 00:22:51.129921 | orchestrator |
2026-01-13 00:22:51.129937 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 00:22:51.129955 | orchestrator | Tuesday 13 January 2026 00:22:50 +0000 (0:00:00.262) 0:00:13.682 *******
2026-01-13 00:22:51.129974 | orchestrator | ===============================================================================
2026-01-13 00:22:51.129994 | orchestrator | Gathering Facts --------------------------------------------------------- 4.32s
2026-01-13 00:22:51.130012 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.22s
2026-01-13 00:22:51.130103 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.20s
2026-01-13 00:22:51.130123 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.17s
2026-01-13 00:22:51.130141 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s
2026-01-13 00:22:51.130160 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.75s
2026-01-13 00:22:51.130179 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s
2026-01-13 00:22:51.130209 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s
2026-01-13 00:22:51.130227 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s
2026-01-13 00:22:51.130295 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2026-01-13 00:22:51.130314 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s
2026-01-13 00:22:51.130333 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2026-01-13 00:22:51.130351 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2026-01-13 00:22:51.130370 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s
2026-01-13 00:22:51.130388 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2026-01-13 00:22:51.130406 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-01-13 00:22:51.130421 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2026-01-13 00:22:51.130432 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
2026-01-13 00:22:51.130442 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2026-01-13 00:22:51.421107 | orchestrator | + osism apply --environment custom facts
2026-01-13 00:22:53.437909 | orchestrator | 2026-01-13 00:22:53 | INFO  | Trying to run play facts in environment custom
2026-01-13 00:23:03.622309 | orchestrator | 2026-01-13 00:23:03 | INFO  | Task 680fbef7-b83d-4dcb-b9b6-a0e0d6048bcb (facts) was prepared for execution.
2026-01-13 00:23:03.622424 | orchestrator | 2026-01-13 00:23:03 | INFO  | It takes a moment until task 680fbef7-b83d-4dcb-b9b6-a0e0d6048bcb (facts) has been started and output is visible here.
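Editor's aside on the `[WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created` message in the operator play above: Ansible warns only because it had to auto-create the directory on first module execution. A minimal sketch of the manual fix the warning suggests, assuming the default per-user `remote_tmp` location (the log shows `/root/.ansible/tmp` because the play ran as root):

```shell
# Pre-create Ansible's remote_tmp with mode 0700 so the module runner
# does not have to create it (and emit the warning) on first use.
# Assumption: default location ~/.ansible/tmp for the connecting user.
remote_tmp="${HOME}/.ansible/tmp"
mkdir -p "$remote_tmp"
chmod 0700 "$remote_tmp"
stat -c '%a' "$remote_tmp"   # prints 700
```

Running this once per remote user (or baking it into the image) keeps the warning out of subsequent runs.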
2026-01-13 00:23:46.572557 | orchestrator |
2026-01-13 00:23:46.572690 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-13 00:23:46.572717 | orchestrator |
2026-01-13 00:23:46.572735 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-13 00:23:46.572750 | orchestrator | Tuesday 13 January 2026 00:23:07 +0000 (0:00:00.073) 0:00:00.073 *******
2026-01-13 00:23:46.572764 | orchestrator | ok: [testbed-manager]
2026-01-13 00:23:46.572780 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:23:46.572796 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:23:46.572812 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:23:46.572827 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:23:46.572843 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:23:46.572858 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:23:46.572875 | orchestrator |
2026-01-13 00:23:46.572891 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-13 00:23:46.572906 | orchestrator | Tuesday 13 January 2026 00:23:08 +0000 (0:00:01.419) 0:00:01.493 *******
2026-01-13 00:23:46.572921 | orchestrator | ok: [testbed-manager]
2026-01-13 00:23:46.572936 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:23:46.572952 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:23:46.572967 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:23:46.572981 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:23:46.572996 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:23:46.573010 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:23:46.573025 | orchestrator |
2026-01-13 00:23:46.573042 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-13 00:23:46.573057 | orchestrator |
2026-01-13 00:23:46.573074 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-13 00:23:46.573091 | orchestrator | Tuesday 13 January 2026 00:23:09 +0000 (0:00:01.089) 0:00:02.583 *******
2026-01-13 00:23:46.573106 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:23:46.573122 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:23:46.573138 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:23:46.573154 | orchestrator |
2026-01-13 00:23:46.573200 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-13 00:23:46.573267 | orchestrator | Tuesday 13 January 2026 00:23:10 +0000 (0:00:00.078) 0:00:02.661 *******
2026-01-13 00:23:46.573288 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:23:46.573305 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:23:46.573323 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:23:46.573339 | orchestrator |
2026-01-13 00:23:46.573355 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-13 00:23:46.573373 | orchestrator | Tuesday 13 January 2026 00:23:10 +0000 (0:00:00.183) 0:00:02.844 *******
2026-01-13 00:23:46.573390 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:23:46.573406 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:23:46.573422 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:23:46.573437 | orchestrator |
2026-01-13 00:23:46.573454 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-13 00:23:46.573470 | orchestrator | Tuesday 13 January 2026 00:23:10 +0000 (0:00:00.183) 0:00:03.027 *******
2026-01-13 00:23:46.573487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 00:23:46.573504 | orchestrator |
2026-01-13 00:23:46.573520 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-13 00:23:46.573536 | orchestrator | Tuesday 13 January 2026 00:23:10 +0000 (0:00:00.127) 0:00:03.155 *******
2026-01-13 00:23:46.573551 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:23:46.573568 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:23:46.573584 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:23:46.573601 | orchestrator |
2026-01-13 00:23:46.573616 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-13 00:23:46.573630 | orchestrator | Tuesday 13 January 2026 00:23:10 +0000 (0:00:00.424) 0:00:03.579 *******
2026-01-13 00:23:46.573645 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:23:46.573660 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:23:46.573677 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:23:46.573693 | orchestrator |
2026-01-13 00:23:46.573710 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-13 00:23:46.573727 | orchestrator | Tuesday 13 January 2026 00:23:11 +0000 (0:00:00.112) 0:00:03.692 *******
2026-01-13 00:23:46.573744 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:23:46.573759 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:23:46.573775 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:23:46.573791 | orchestrator |
2026-01-13 00:23:46.573807 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-13 00:23:46.573823 | orchestrator | Tuesday 13 January 2026 00:23:12 +0000 (0:00:00.997) 0:00:04.689 *******
2026-01-13 00:23:46.573840 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:23:46.573856 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:23:46.573873 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:23:46.573889 | orchestrator |
2026-01-13 00:23:46.573906 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-13 00:23:46.573923 | orchestrator | Tuesday 13 January 2026 00:23:12 +0000 (0:00:00.464) 0:00:05.154 *******
2026-01-13 00:23:46.573939 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:23:46.573956 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:23:46.573973 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:23:46.573989 | orchestrator |
2026-01-13 00:23:46.574005 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-13 00:23:46.574130 | orchestrator | Tuesday 13 January 2026 00:23:13 +0000 (0:00:01.046) 0:00:06.200 *******
2026-01-13 00:23:46.574146 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:23:46.574156 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:23:46.574165 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:23:46.574175 | orchestrator |
2026-01-13 00:23:46.574184 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-13 00:23:46.574210 | orchestrator | Tuesday 13 January 2026 00:23:29 +0000 (0:00:15.507) 0:00:21.708 *******
2026-01-13 00:23:46.574220 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:23:46.574230 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:23:46.574285 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:23:46.574295 | orchestrator |
2026-01-13 00:23:46.574304 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-13 00:23:46.574336 | orchestrator | Tuesday 13 January 2026 00:23:29 +0000 (0:00:00.100) 0:00:21.808 *******
2026-01-13 00:23:46.574350 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:23:46.574367 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:23:46.574382 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:23:46.574398 | orchestrator |
2026-01-13 00:23:46.574412 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-13 00:23:46.574428 | orchestrator | Tuesday 13 January 2026 00:23:37 +0000 (0:00:08.343) 0:00:30.152 *******
2026-01-13 00:23:46.574443 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:23:46.574457 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:23:46.574471 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:23:46.574485 | orchestrator |
2026-01-13 00:23:46.574501 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-13 00:23:46.574515 | orchestrator | Tuesday 13 January 2026 00:23:37 +0000 (0:00:00.458) 0:00:30.610 *******
2026-01-13 00:23:46.574532 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-13 00:23:46.574549 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-13 00:23:46.574563 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-13 00:23:46.574577 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-13 00:23:46.574590 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-13 00:23:46.574602 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-13 00:23:46.574610 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-13 00:23:46.574618 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-13 00:23:46.574625 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-13 00:23:46.574633 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-13 00:23:46.574641 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-13 00:23:46.574649 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-13 00:23:46.574657 | orchestrator |
2026-01-13 00:23:46.574665 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-13 00:23:46.574673 | orchestrator | Tuesday 13 January 2026 00:23:41 +0000 (0:00:03.568) 0:00:34.179 *******
2026-01-13 00:23:46.574680 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:23:46.574688 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:23:46.574696 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:23:46.574704 | orchestrator |
2026-01-13 00:23:46.574712 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-13 00:23:46.574719 | orchestrator |
2026-01-13 00:23:46.574727 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-13 00:23:46.574735 | orchestrator | Tuesday 13 January 2026 00:23:42 +0000 (0:00:01.266) 0:00:35.446 *******
2026-01-13 00:23:46.574743 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:23:46.574751 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:23:46.574758 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:23:46.574766 | orchestrator | ok: [testbed-manager]
2026-01-13 00:23:46.574774 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:23:46.574782 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:23:46.574789 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:23:46.574797 | orchestrator |
2026-01-13 00:23:46.574805 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 00:23:46.574851 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:23:46.574870 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:23:46.574880 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:23:46.574888 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:23:46.574896 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-13 00:23:46.574904 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-13 00:23:46.574912 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-13 00:23:46.574920 | orchestrator |
2026-01-13 00:23:46.574928 | orchestrator |
2026-01-13 00:23:46.574936 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 00:23:46.574943 | orchestrator | Tuesday 13 January 2026 00:23:46 +0000 (0:00:03.759) 0:00:39.205 *******
2026-01-13 00:23:46.574951 | orchestrator | ===============================================================================
2026-01-13 00:23:46.574959 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.51s
2026-01-13 00:23:46.574967 | orchestrator | Install required packages (Debian) -------------------------------------- 8.34s
2026-01-13 00:23:46.574975 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.76s
2026-01-13 00:23:46.574984 | orchestrator | Copy fact files --------------------------------------------------------- 3.57s
2026-01-13 00:23:46.574997 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s
2026-01-13 00:23:46.575011 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.27s
2026-01-13 00:23:46.575035 | orchestrator | Copy fact file ---------------------------------------------------------- 1.09s
2026-01-13 00:23:46.777344 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s
2026-01-13 00:23:46.777444 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.00s
2026-01-13 00:23:46.777459 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-01-13 00:23:46.777470 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-01-13 00:23:46.777481 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s
2026-01-13 00:23:46.777492 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s
2026-01-13 00:23:46.777502 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2026-01-13 00:23:46.777513 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2026-01-13 00:23:46.777524 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2026-01-13 00:23:46.777535 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-01-13 00:23:46.777545 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.08s
2026-01-13 00:23:47.058203 | orchestrator | + osism apply bootstrap
2026-01-13 00:23:59.238325 | orchestrator | 2026-01-13 00:23:59 | INFO  | Task b55d8260-af86-4169-9c84-5a2c3e1c880b (bootstrap) was prepared for execution.
2026-01-13 00:23:59.238446 | orchestrator | 2026-01-13 00:23:59 | INFO  | It takes a moment until task b55d8260-af86-4169-9c84-5a2c3e1c880b (bootstrap) has been started and output is visible here.
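Editor's aside: the PLAY RECAP blocks above are the quickest health signal in these logs, with per-host counters in the form `ok=… changed=… unreachable=… failed=…`. A hedged one-liner for scanning recap lines for trouble (the sample line is copied from the recap above; the grep pattern is my own sketch, not part of the job):

```shell
# Flag any recap host whose failed= or unreachable= counter is non-zero.
recap='testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0'
if printf '%s\n' "$recap" | grep -Eq '(failed|unreachable)=[1-9]'; then
  echo "recap: failures detected"
else
  echo "recap: all hosts healthy"   # this branch fires for the sample line
fi
```

Piped over a full log (`grep ' : ok=' job.log | …`), the same pattern separates healthy runs from ones worth reading in detail.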
2026-01-13 00:24:16.130731 | orchestrator |
2026-01-13 00:24:16.130863 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-13 00:24:16.130903 | orchestrator |
2026-01-13 00:24:16.130917 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-13 00:24:16.130929 | orchestrator | Tuesday 13 January 2026 00:24:03 +0000 (0:00:00.114) 0:00:00.114 *******
2026-01-13 00:24:16.130940 | orchestrator | ok: [testbed-manager]
2026-01-13 00:24:16.130952 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:24:16.130963 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:24:16.130974 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:24:16.130984 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:24:16.130995 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:24:16.131005 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:24:16.131016 | orchestrator |
2026-01-13 00:24:16.131027 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-13 00:24:16.131038 | orchestrator |
2026-01-13 00:24:16.131048 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-13 00:24:16.131059 | orchestrator | Tuesday 13 January 2026 00:24:03 +0000 (0:00:00.197) 0:00:00.312 *******
2026-01-13 00:24:16.131070 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:24:16.131081 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:24:16.131091 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:24:16.131102 | orchestrator | ok: [testbed-manager]
2026-01-13 00:24:16.131113 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:24:16.131124 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:24:16.131135 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:24:16.131146 | orchestrator |
2026-01-13 00:24:16.131157 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-01-13 00:24:16.131167 | orchestrator |
2026-01-13 00:24:16.131178 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-13 00:24:16.131189 | orchestrator | Tuesday 13 January 2026 00:24:08 +0000 (0:00:04.697) 0:00:05.009 *******
2026-01-13 00:24:16.131200 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-13 00:24:16.131212 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-13 00:24:16.131222 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-13 00:24:16.131261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-13 00:24:16.131282 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-13 00:24:16.131295 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-13 00:24:16.131308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-13 00:24:16.131321 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-13 00:24:16.131333 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-13 00:24:16.131345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-13 00:24:16.131356 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-13 00:24:16.131366 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-13 00:24:16.131377 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:24:16.131388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-13 00:24:16.131399 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-13 00:24:16.131409 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-13 00:24:16.131420 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-13 00:24:16.131430 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-13 00:24:16.131441 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-13 00:24:16.131451 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-13 00:24:16.131461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-13 00:24:16.131472 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-13 00:24:16.131483 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-13 00:24:16.131501 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-13 00:24:16.131512 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-13 00:24:16.131522 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-13 00:24:16.131533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-13 00:24:16.131543 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-13 00:24:16.131554 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-13 00:24:16.131564 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-13 00:24:16.131575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-13 00:24:16.131585 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-13 00:24:16.131595 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-13 00:24:16.131606 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:24:16.131617 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-13 00:24:16.131627 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-13 00:24:16.131638 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-13 00:24:16.131648 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:24:16.131659 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-13 00:24:16.131669 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-13 00:24:16.131679 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-13 00:24:16.131690 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-13 00:24:16.131700 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-13 00:24:16.131711 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-13 00:24:16.131721 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-13 00:24:16.131732 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:24:16.131761 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-13 00:24:16.131772 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-13 00:24:16.131783 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:24:16.131793 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-13 00:24:16.131804 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:24:16.131815 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-13 00:24:16.131825 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-13 00:24:16.131836 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-13 00:24:16.131847 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-13 00:24:16.131857 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:24:16.131868 | orchestrator |
2026-01-13 00:24:16.131879 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-13 00:24:16.131890 | orchestrator |
2026-01-13 00:24:16.131901 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-13 00:24:16.131911 | orchestrator | Tuesday 13 January 2026 00:24:08 +0000 (0:00:00.454) 0:00:05.464 *******
2026-01-13 00:24:16.131922 | orchestrator | ok: [testbed-manager]
2026-01-13 00:24:16.131933 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:24:16.131943 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:24:16.131954 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:24:16.131965 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:24:16.131975 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:24:16.131986 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:24:16.131996 | orchestrator |
2026-01-13 00:24:16.132007 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-01-13 00:24:16.132018 | orchestrator | Tuesday 13 January 2026 00:24:10 +0000 (0:00:01.345) 0:00:06.809 *******
2026-01-13 00:24:16.132029 | orchestrator | ok: [testbed-manager]
2026-01-13 00:24:16.132039 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:24:16.132057 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:24:16.132067 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:24:16.132078 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:24:16.132089 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:24:16.132099 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:24:16.132110 | orchestrator |
2026-01-13 00:24:16.132121 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-01-13 00:24:16.132132 | orchestrator | Tuesday 13 January 2026 00:24:11 +0000 (0:00:01.230) 0:00:08.039 *******
2026-01-13 00:24:16.132144 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:24:16.132157 | orchestrator |
2026-01-13 00:24:16.132168 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-01-13 00:24:16.132179 | orchestrator
| Tuesday 13 January 2026 00:24:11 +0000 (0:00:00.263) 0:00:08.303 ******* 2026-01-13 00:24:16.132190 | orchestrator | changed: [testbed-manager] 2026-01-13 00:24:16.132201 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:24:16.132212 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:24:16.132222 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:24:16.132256 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:24:16.132269 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:24:16.132280 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:24:16.132290 | orchestrator | 2026-01-13 00:24:16.132301 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-13 00:24:16.132312 | orchestrator | Tuesday 13 January 2026 00:24:13 +0000 (0:00:02.092) 0:00:10.395 ******* 2026-01-13 00:24:16.132323 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:24:16.132334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:24:16.132347 | orchestrator | 2026-01-13 00:24:16.132357 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-13 00:24:16.132368 | orchestrator | Tuesday 13 January 2026 00:24:13 +0000 (0:00:00.250) 0:00:10.646 ******* 2026-01-13 00:24:16.132383 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:24:16.132401 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:24:16.132420 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:24:16.132438 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:24:16.132456 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:24:16.132474 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:24:16.132492 | orchestrator | 2026-01-13 00:24:16.132509 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2026-01-13 00:24:16.132528 | orchestrator | Tuesday 13 January 2026 00:24:15 +0000 (0:00:01.028) 0:00:11.674 ******* 2026-01-13 00:24:16.132547 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:24:16.132565 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:24:16.132582 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:24:16.132611 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:24:16.132629 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:24:16.132647 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:24:16.132667 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:24:16.132686 | orchestrator | 2026-01-13 00:24:16.132705 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-13 00:24:16.132720 | orchestrator | Tuesday 13 January 2026 00:24:15 +0000 (0:00:00.575) 0:00:12.250 ******* 2026-01-13 00:24:16.132730 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:24:16.132741 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:24:16.132751 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:24:16.132762 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:24:16.132772 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:24:16.132783 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:24:16.132803 | orchestrator | ok: [testbed-manager] 2026-01-13 00:24:16.132813 | orchestrator | 2026-01-13 00:24:16.132824 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-13 00:24:16.132835 | orchestrator | Tuesday 13 January 2026 00:24:15 +0000 (0:00:00.424) 0:00:12.674 ******* 2026-01-13 00:24:16.132846 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:24:16.132857 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:24:16.132879 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:24:28.429124 | orchestrator | skipping: 
[testbed-node-5] 2026-01-13 00:24:28.429228 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:24:28.429270 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:24:28.429282 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:24:28.429293 | orchestrator | 2026-01-13 00:24:28.429306 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-13 00:24:28.429319 | orchestrator | Tuesday 13 January 2026 00:24:16 +0000 (0:00:00.225) 0:00:12.900 ******* 2026-01-13 00:24:28.429332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:24:28.429354 | orchestrator | 2026-01-13 00:24:28.429366 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-13 00:24:28.429378 | orchestrator | Tuesday 13 January 2026 00:24:16 +0000 (0:00:00.281) 0:00:13.182 ******* 2026-01-13 00:24:28.429389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:24:28.429400 | orchestrator | 2026-01-13 00:24:28.429411 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-13 00:24:28.429421 | orchestrator | Tuesday 13 January 2026 00:24:16 +0000 (0:00:00.297) 0:00:13.479 ******* 2026-01-13 00:24:28.429432 | orchestrator | ok: [testbed-manager] 2026-01-13 00:24:28.429444 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:24:28.429455 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:24:28.429465 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:24:28.429476 | orchestrator | ok: [testbed-node-1] 2026-01-13 
00:24:28.429487 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:24:28.429497 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:24:28.429508 | orchestrator | 2026-01-13 00:24:28.429518 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-13 00:24:28.429529 | orchestrator | Tuesday 13 January 2026 00:24:18 +0000 (0:00:01.637) 0:00:15.117 ******* 2026-01-13 00:24:28.429540 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:24:28.429551 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:24:28.429562 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:24:28.429572 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:24:28.429584 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:24:28.429595 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:24:28.429606 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:24:28.429616 | orchestrator | 2026-01-13 00:24:28.429627 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-13 00:24:28.429638 | orchestrator | Tuesday 13 January 2026 00:24:18 +0000 (0:00:00.245) 0:00:15.363 ******* 2026-01-13 00:24:28.429649 | orchestrator | ok: [testbed-manager] 2026-01-13 00:24:28.429660 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:24:28.429671 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:24:28.429681 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:24:28.429692 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:24:28.429702 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:24:28.429713 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:24:28.429723 | orchestrator | 2026-01-13 00:24:28.429734 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-13 00:24:28.429765 | orchestrator | Tuesday 13 January 2026 00:24:19 +0000 (0:00:00.552) 0:00:15.915 ******* 2026-01-13 00:24:28.429777 | orchestrator | skipping: 
[testbed-manager] 2026-01-13 00:24:28.429787 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:24:28.429798 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:24:28.429808 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:24:28.429819 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:24:28.429829 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:24:28.429839 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:24:28.429850 | orchestrator | 2026-01-13 00:24:28.429860 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-13 00:24:28.429872 | orchestrator | Tuesday 13 January 2026 00:24:19 +0000 (0:00:00.334) 0:00:16.250 ******* 2026-01-13 00:24:28.429882 | orchestrator | ok: [testbed-manager] 2026-01-13 00:24:28.429893 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:24:28.429903 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:24:28.429914 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:24:28.429924 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:24:28.429934 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:24:28.429945 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:24:28.429955 | orchestrator | 2026-01-13 00:24:28.429966 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-13 00:24:28.429976 | orchestrator | Tuesday 13 January 2026 00:24:20 +0000 (0:00:00.539) 0:00:16.791 ******* 2026-01-13 00:24:28.429987 | orchestrator | ok: [testbed-manager] 2026-01-13 00:24:28.429997 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:24:28.430008 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:24:28.430090 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:24:28.430103 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:24:28.430114 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:24:28.430125 | orchestrator | changed: 
[testbed-node-2] 2026-01-13 00:24:28.430136 | orchestrator | 2026-01-13 00:24:28.430147 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-13 00:24:28.430158 | orchestrator | Tuesday 13 January 2026 00:24:21 +0000 (0:00:01.146) 0:00:17.937 ******* 2026-01-13 00:24:28.430169 | orchestrator | ok: [testbed-manager] 2026-01-13 00:24:28.430180 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:24:28.430190 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:24:28.430201 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:24:28.430212 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:24:28.430222 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:24:28.430264 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:24:28.430277 | orchestrator | 2026-01-13 00:24:28.430288 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-13 00:24:28.430299 | orchestrator | Tuesday 13 January 2026 00:24:22 +0000 (0:00:01.170) 0:00:19.108 ******* 2026-01-13 00:24:28.430335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:24:28.430348 | orchestrator | 2026-01-13 00:24:28.430359 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-13 00:24:28.430370 | orchestrator | Tuesday 13 January 2026 00:24:22 +0000 (0:00:00.319) 0:00:19.427 ******* 2026-01-13 00:24:28.430381 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:24:28.430393 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:24:28.430403 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:24:28.430414 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:24:28.430425 | orchestrator | changed: [testbed-node-2] 2026-01-13 
00:24:28.430436 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:24:28.430447 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:24:28.430458 | orchestrator | 2026-01-13 00:24:28.430469 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-13 00:24:28.430480 | orchestrator | Tuesday 13 January 2026 00:24:24 +0000 (0:00:01.272) 0:00:20.700 ******* 2026-01-13 00:24:28.430500 | orchestrator | ok: [testbed-manager] 2026-01-13 00:24:28.430512 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:24:28.430523 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:24:28.430534 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:24:28.430545 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:24:28.430555 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:24:28.430566 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:24:28.430577 | orchestrator | 2026-01-13 00:24:28.430588 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-13 00:24:28.430599 | orchestrator | Tuesday 13 January 2026 00:24:24 +0000 (0:00:00.228) 0:00:20.928 ******* 2026-01-13 00:24:28.430610 | orchestrator | ok: [testbed-manager] 2026-01-13 00:24:28.430622 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:24:28.430633 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:24:28.430643 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:24:28.430655 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:24:28.430665 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:24:28.430676 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:24:28.430687 | orchestrator | 2026-01-13 00:24:28.430699 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-13 00:24:28.430720 | orchestrator | Tuesday 13 January 2026 00:24:24 +0000 (0:00:00.233) 0:00:21.161 ******* 2026-01-13 00:24:28.430732 | orchestrator | ok: [testbed-manager] 2026-01-13 00:24:28.430743 | 
orchestrator | ok: [testbed-node-3] 2026-01-13 00:24:28.430754 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:24:28.430765 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:24:28.430776 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:24:28.430787 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:24:28.430798 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:24:28.430809 | orchestrator | 2026-01-13 00:24:28.430820 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-13 00:24:28.430831 | orchestrator | Tuesday 13 January 2026 00:24:24 +0000 (0:00:00.228) 0:00:21.390 ******* 2026-01-13 00:24:28.430843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:24:28.430855 | orchestrator | 2026-01-13 00:24:28.430867 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-13 00:24:28.430878 | orchestrator | Tuesday 13 January 2026 00:24:25 +0000 (0:00:00.290) 0:00:21.681 ******* 2026-01-13 00:24:28.430888 | orchestrator | ok: [testbed-manager] 2026-01-13 00:24:28.430899 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:24:28.430910 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:24:28.430921 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:24:28.430932 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:24:28.430943 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:24:28.430954 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:24:28.430965 | orchestrator | 2026-01-13 00:24:28.430976 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-13 00:24:28.430987 | orchestrator | Tuesday 13 January 2026 00:24:25 +0000 (0:00:00.503) 0:00:22.185 ******* 2026-01-13 00:24:28.430998 | orchestrator | 
skipping: [testbed-manager] 2026-01-13 00:24:28.431009 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:24:28.431020 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:24:28.431031 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:24:28.431042 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:24:28.431053 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:24:28.431064 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:24:28.431075 | orchestrator | 2026-01-13 00:24:28.431086 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-13 00:24:28.431097 | orchestrator | Tuesday 13 January 2026 00:24:25 +0000 (0:00:00.253) 0:00:22.438 ******* 2026-01-13 00:24:28.431108 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:24:28.431126 | orchestrator | ok: [testbed-manager] 2026-01-13 00:24:28.431137 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:24:28.431148 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:24:28.431159 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:24:28.431170 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:24:28.431181 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:24:28.431192 | orchestrator | 2026-01-13 00:24:28.431203 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-13 00:24:28.431214 | orchestrator | Tuesday 13 January 2026 00:24:26 +0000 (0:00:00.965) 0:00:23.403 ******* 2026-01-13 00:24:28.431225 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:24:28.431314 | orchestrator | ok: [testbed-manager] 2026-01-13 00:24:28.431328 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:24:28.431339 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:24:28.431350 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:24:28.431361 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:24:28.431373 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:24:28.431384 | orchestrator | 
2026-01-13 00:24:28.431395 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-13 00:24:28.431406 | orchestrator | Tuesday 13 January 2026 00:24:27 +0000 (0:00:00.523) 0:00:23.927 ******* 2026-01-13 00:24:28.431416 | orchestrator | ok: [testbed-manager] 2026-01-13 00:24:28.431427 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:24:28.431437 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:24:28.431448 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:24:28.431467 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:25:09.254489 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:25:09.254598 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:25:09.254619 | orchestrator | 2026-01-13 00:25:09.254639 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-13 00:25:09.254657 | orchestrator | Tuesday 13 January 2026 00:24:28 +0000 (0:00:01.157) 0:00:25.084 ******* 2026-01-13 00:25:09.254674 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:25:09.254690 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:25:09.254706 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:25:09.254716 | orchestrator | changed: [testbed-manager] 2026-01-13 00:25:09.254725 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:25:09.254734 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:25:09.254743 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:25:09.254752 | orchestrator | 2026-01-13 00:25:09.254761 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-01-13 00:25:09.254770 | orchestrator | Tuesday 13 January 2026 00:24:44 +0000 (0:00:16.143) 0:00:41.228 ******* 2026-01-13 00:25:09.254779 | orchestrator | ok: [testbed-manager] 2026-01-13 00:25:09.254788 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:25:09.254796 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:25:09.254805 | orchestrator 
| ok: [testbed-node-5] 2026-01-13 00:25:09.254814 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:25:09.254823 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:25:09.254832 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:25:09.254840 | orchestrator | 2026-01-13 00:25:09.254849 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-01-13 00:25:09.254858 | orchestrator | Tuesday 13 January 2026 00:24:44 +0000 (0:00:00.240) 0:00:41.468 ******* 2026-01-13 00:25:09.254866 | orchestrator | ok: [testbed-manager] 2026-01-13 00:25:09.254875 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:25:09.254884 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:25:09.254892 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:25:09.254901 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:25:09.254909 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:25:09.254918 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:25:09.254926 | orchestrator | 2026-01-13 00:25:09.254935 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-01-13 00:25:09.254944 | orchestrator | Tuesday 13 January 2026 00:24:45 +0000 (0:00:00.232) 0:00:41.701 ******* 2026-01-13 00:25:09.254952 | orchestrator | ok: [testbed-manager] 2026-01-13 00:25:09.254984 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:25:09.254993 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:25:09.255001 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:25:09.255011 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:25:09.255022 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:25:09.255032 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:25:09.255042 | orchestrator | 2026-01-13 00:25:09.255056 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-01-13 00:25:09.255073 | orchestrator | Tuesday 13 January 2026 00:24:45 +0000 (0:00:00.217) 0:00:41.919 ******* 2026-01-13 
00:25:09.255091 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:25:09.255111 | orchestrator | 2026-01-13 00:25:09.255128 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-01-13 00:25:09.255139 | orchestrator | Tuesday 13 January 2026 00:24:45 +0000 (0:00:00.263) 0:00:42.182 ******* 2026-01-13 00:25:09.255149 | orchestrator | ok: [testbed-manager] 2026-01-13 00:25:09.255160 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:25:09.255170 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:25:09.255180 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:25:09.255190 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:25:09.255200 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:25:09.255210 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:25:09.255220 | orchestrator | 2026-01-13 00:25:09.255253 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-01-13 00:25:09.255266 | orchestrator | Tuesday 13 January 2026 00:24:47 +0000 (0:00:01.870) 0:00:44.053 ******* 2026-01-13 00:25:09.255294 | orchestrator | changed: [testbed-manager] 2026-01-13 00:25:09.255310 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:25:09.255330 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:25:09.255349 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:25:09.255367 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:25:09.255384 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:25:09.255401 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:25:09.255419 | orchestrator | 2026-01-13 00:25:09.255433 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-01-13 00:25:09.255442 | 
orchestrator | Tuesday 13 January 2026 00:24:48 +0000 (0:00:01.189) 0:00:45.243 ******* 2026-01-13 00:25:09.255455 | orchestrator | ok: [testbed-manager] 2026-01-13 00:25:09.255470 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:25:09.255485 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:25:09.255501 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:25:09.255517 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:25:09.255532 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:25:09.255544 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:25:09.255552 | orchestrator | 2026-01-13 00:25:09.255561 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-01-13 00:25:09.255570 | orchestrator | Tuesday 13 January 2026 00:24:49 +0000 (0:00:00.869) 0:00:46.112 ******* 2026-01-13 00:25:09.255580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:25:09.255590 | orchestrator | 2026-01-13 00:25:09.255599 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-01-13 00:25:09.255608 | orchestrator | Tuesday 13 January 2026 00:24:49 +0000 (0:00:00.286) 0:00:46.399 ******* 2026-01-13 00:25:09.255617 | orchestrator | changed: [testbed-manager] 2026-01-13 00:25:09.255625 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:25:09.255636 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:25:09.255651 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:25:09.255665 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:25:09.255691 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:25:09.255713 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:25:09.255729 | orchestrator | 2026-01-13 00:25:09.255758 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-01-13 00:25:09.255767 | orchestrator | Tuesday 13 January 2026 00:24:50 +0000 (0:00:01.106) 0:00:47.505 ******* 2026-01-13 00:25:09.255776 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:25:09.255784 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:25:09.255793 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:25:09.255801 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:25:09.255810 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:25:09.255818 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:25:09.255826 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:25:09.255835 | orchestrator | 2026-01-13 00:25:09.255843 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-01-13 00:25:09.255852 | orchestrator | Tuesday 13 January 2026 00:24:51 +0000 (0:00:00.240) 0:00:47.746 ******* 2026-01-13 00:25:09.255861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:25:09.255870 | orchestrator | 2026-01-13 00:25:09.255878 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-01-13 00:25:09.255887 | orchestrator | Tuesday 13 January 2026 00:24:51 +0000 (0:00:00.292) 0:00:48.039 ******* 2026-01-13 00:25:09.255896 | orchestrator | ok: [testbed-manager] 2026-01-13 00:25:09.255904 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:25:09.255912 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:25:09.255921 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:25:09.255929 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:25:09.255938 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:25:09.255946 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:25:09.255955 | 
orchestrator | 2026-01-13 00:25:09.255963 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-01-13 00:25:09.255972 | orchestrator | Tuesday 13 January 2026 00:24:53 +0000 (0:00:01.825) 0:00:49.865 ******* 2026-01-13 00:25:09.255980 | orchestrator | changed: [testbed-manager] 2026-01-13 00:25:09.255989 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:25:09.255997 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:25:09.256006 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:25:09.256014 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:25:09.256022 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:25:09.256031 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:25:09.256040 | orchestrator | 2026-01-13 00:25:09.256055 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-01-13 00:25:09.256071 | orchestrator | Tuesday 13 January 2026 00:24:54 +0000 (0:00:01.116) 0:00:50.981 ******* 2026-01-13 00:25:09.256088 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:25:09.256104 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:25:09.256120 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:25:09.256136 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:25:09.256152 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:25:09.256168 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:25:09.256183 | orchestrator | changed: [testbed-manager] 2026-01-13 00:25:09.256199 | orchestrator | 2026-01-13 00:25:09.256216 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-01-13 00:25:09.256256 | orchestrator | Tuesday 13 January 2026 00:25:06 +0000 (0:00:11.698) 0:01:02.679 ******* 2026-01-13 00:25:09.256273 | orchestrator | ok: [testbed-manager] 2026-01-13 00:25:09.256289 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:25:09.256304 | orchestrator | ok: 
[testbed-node-3] 2026-01-13 00:25:09.256319 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:25:09.256334 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:25:09.256349 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:25:09.256376 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:25:09.256386 | orchestrator | 2026-01-13 00:25:09.256400 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-01-13 00:25:09.256415 | orchestrator | Tuesday 13 January 2026 00:25:07 +0000 (0:00:01.656) 0:01:04.335 ******* 2026-01-13 00:25:09.256430 | orchestrator | ok: [testbed-manager] 2026-01-13 00:25:09.256445 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:25:09.256460 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:25:09.256475 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:25:09.256486 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:25:09.256494 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:25:09.256502 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:25:09.256511 | orchestrator | 2026-01-13 00:25:09.256519 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-01-13 00:25:09.256528 | orchestrator | Tuesday 13 January 2026 00:25:08 +0000 (0:00:00.887) 0:01:05.223 ******* 2026-01-13 00:25:09.256537 | orchestrator | ok: [testbed-manager] 2026-01-13 00:25:09.256545 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:25:09.256553 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:25:09.256565 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:25:09.256580 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:25:09.256596 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:25:09.256611 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:25:09.256620 | orchestrator | 2026-01-13 00:25:09.256628 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-01-13 00:25:09.256637 | orchestrator | Tuesday 
13 January 2026 00:25:08 +0000 (0:00:00.210) 0:01:05.434 ******* 2026-01-13 00:25:09.256646 | orchestrator | ok: [testbed-manager] 2026-01-13 00:25:09.256654 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:25:09.256663 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:25:09.256671 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:25:09.256687 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:25:09.256702 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:25:09.256716 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:25:09.256730 | orchestrator | 2026-01-13 00:25:09.256745 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-01-13 00:25:09.256761 | orchestrator | Tuesday 13 January 2026 00:25:08 +0000 (0:00:00.217) 0:01:05.651 ******* 2026-01-13 00:25:09.256778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:25:09.256795 | orchestrator | 2026-01-13 00:25:09.256831 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-01-13 00:27:26.002967 | orchestrator | Tuesday 13 January 2026 00:25:09 +0000 (0:00:00.266) 0:01:05.917 ******* 2026-01-13 00:27:26.003103 | orchestrator | ok: [testbed-manager] 2026-01-13 00:27:26.003121 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:27:26.003133 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:27:26.003144 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:27:26.003155 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:27:26.003166 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:27:26.003177 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:27:26.003188 | orchestrator | 2026-01-13 00:27:26.003201 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-01-13 00:27:26.003212 | orchestrator | Tuesday 13 January 2026 00:25:10 +0000 (0:00:01.667) 0:01:07.585 ******* 2026-01-13 00:27:26.003223 | orchestrator | changed: [testbed-manager] 2026-01-13 00:27:26.003284 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:27:26.003297 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:27:26.003308 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:27:26.003319 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:27:26.003330 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:27:26.003341 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:27:26.003351 | orchestrator | 2026-01-13 00:27:26.003362 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-01-13 00:27:26.003399 | orchestrator | Tuesday 13 January 2026 00:25:11 +0000 (0:00:00.587) 0:01:08.173 ******* 2026-01-13 00:27:26.003410 | orchestrator | ok: [testbed-manager] 2026-01-13 00:27:26.003421 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:27:26.003432 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:27:26.003442 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:27:26.003452 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:27:26.003463 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:27:26.003473 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:27:26.003484 | orchestrator | 2026-01-13 00:27:26.003494 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-01-13 00:27:26.003515 | orchestrator | Tuesday 13 January 2026 00:25:11 +0000 (0:00:00.234) 0:01:08.408 ******* 2026-01-13 00:27:26.003540 | orchestrator | ok: [testbed-manager] 2026-01-13 00:27:26.003566 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:27:26.003586 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:27:26.003605 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:27:26.003623 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:27:26.003642 | 
orchestrator | ok: [testbed-node-3] 2026-01-13 00:27:26.003661 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:27:26.003680 | orchestrator | 2026-01-13 00:27:26.003702 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-01-13 00:27:26.003723 | orchestrator | Tuesday 13 January 2026 00:25:13 +0000 (0:00:01.314) 0:01:09.722 ******* 2026-01-13 00:27:26.003745 | orchestrator | changed: [testbed-manager] 2026-01-13 00:27:26.003765 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:27:26.003783 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:27:26.003794 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:27:26.003805 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:27:26.003815 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:27:26.003826 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:27:26.003837 | orchestrator | 2026-01-13 00:27:26.003847 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-01-13 00:27:26.003858 | orchestrator | Tuesday 13 January 2026 00:25:14 +0000 (0:00:01.735) 0:01:11.458 ******* 2026-01-13 00:27:26.003869 | orchestrator | ok: [testbed-manager] 2026-01-13 00:27:26.003879 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:27:26.003890 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:27:26.003900 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:27:26.003911 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:27:26.003921 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:27:26.003932 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:27:26.003942 | orchestrator | 2026-01-13 00:27:26.003954 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-01-13 00:27:26.003964 | orchestrator | Tuesday 13 January 2026 00:25:17 +0000 (0:00:02.752) 0:01:14.210 ******* 2026-01-13 00:27:26.003975 | orchestrator | ok: [testbed-manager] 2026-01-13 00:27:26.003986 
| orchestrator | ok: [testbed-node-5] 2026-01-13 00:27:26.003996 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:27:26.004007 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:27:26.004017 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:27:26.004028 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:27:26.004038 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:27:26.004048 | orchestrator | 2026-01-13 00:27:26.004059 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-01-13 00:27:26.004070 | orchestrator | Tuesday 13 January 2026 00:25:55 +0000 (0:00:38.022) 0:01:52.233 ******* 2026-01-13 00:27:26.004081 | orchestrator | changed: [testbed-manager] 2026-01-13 00:27:26.004091 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:27:26.004102 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:27:26.004113 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:27:26.004123 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:27:26.004134 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:27:26.004144 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:27:26.004155 | orchestrator | 2026-01-13 00:27:26.004177 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-01-13 00:27:26.004188 | orchestrator | Tuesday 13 January 2026 00:27:12 +0000 (0:01:17.350) 0:03:09.583 ******* 2026-01-13 00:27:26.004199 | orchestrator | ok: [testbed-manager] 2026-01-13 00:27:26.004210 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:27:26.004220 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:27:26.004231 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:27:26.004270 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:27:26.004281 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:27:26.004292 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:27:26.004303 | orchestrator | 2026-01-13 00:27:26.004313 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-01-13 00:27:26.004324 | orchestrator | Tuesday 13 January 2026 00:27:14 +0000 (0:00:01.891) 0:03:11.475 ******* 2026-01-13 00:27:26.004336 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:27:26.004346 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:27:26.004357 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:27:26.004368 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:27:26.004378 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:27:26.004389 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:27:26.004400 | orchestrator | changed: [testbed-manager] 2026-01-13 00:27:26.004410 | orchestrator | 2026-01-13 00:27:26.004421 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-01-13 00:27:26.004433 | orchestrator | Tuesday 13 January 2026 00:27:24 +0000 (0:00:10.154) 0:03:21.630 ******* 2026-01-13 00:27:26.004480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-01-13 00:27:26.004504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-01-13 00:27:26.004519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-01-13 00:27:26.004531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-13 00:27:26.004543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-13 00:27:26.004554 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-01-13 00:27:26.004573 | orchestrator | 2026-01-13 00:27:26.004584 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-01-13 00:27:26.004595 | orchestrator | Tuesday 13 January 2026 00:27:25 +0000 (0:00:00.362) 0:03:21.992 ******* 2026-01-13 00:27:26.004606 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-13 00:27:26.004621 | orchestrator | 
skipping: [testbed-manager] 2026-01-13 00:27:26.004632 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-13 00:27:26.004643 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:27:26.004654 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-13 00:27:26.004665 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:27:26.004676 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-13 00:27:26.004686 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:27:26.004697 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-13 00:27:26.004708 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-13 00:27:26.004719 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-13 00:27:26.004729 | orchestrator | 2026-01-13 00:27:26.004740 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-01-13 00:27:26.004761 | orchestrator | Tuesday 13 January 2026 00:27:25 +0000 (0:00:00.608) 0:03:22.601 ******* 2026-01-13 00:27:26.004773 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-13 00:27:26.004785 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-13 00:27:26.004795 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-13 00:27:26.004806 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-13 00:27:26.004817 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-13 00:27:26.004839 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-13 00:27:35.140457 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-13 00:27:35.140564 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-13 00:27:35.140578 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-13 00:27:35.140590 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-13 00:27:35.140601 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-13 00:27:35.140611 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-13 00:27:35.140621 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:27:35.140631 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-13 00:27:35.140641 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-13 00:27:35.140651 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-13 00:27:35.140660 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-13 00:27:35.140670 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-13 00:27:35.140680 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-13 00:27:35.140714 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-13 00:27:35.140724 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-13 00:27:35.140734 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-13 00:27:35.140744 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-13 00:27:35.140753 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-13 00:27:35.140763 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-13 00:27:35.140923 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-13 00:27:35.140934 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-13 00:27:35.140945 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:27:35.140956 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-13 00:27:35.140973 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-13 00:27:35.140988 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-13 00:27:35.141005 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-13 00:27:35.141022 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-13 00:27:35.141039 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:27:35.141058 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-13 00:27:35.141075 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-13 00:27:35.141129 | orchestrator | skipping: [testbed-node-5] => 
(item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-13 00:27:35.141142 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-13 00:27:35.141153 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-13 00:27:35.141164 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-13 00:27:35.141175 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-13 00:27:35.141186 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-13 00:27:35.141197 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-13 00:27:35.141208 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:27:35.141219 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-13 00:27:35.141229 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-13 00:27:35.141286 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-13 00:27:35.141298 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-13 00:27:35.141325 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-13 00:27:35.141356 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-13 00:27:35.141366 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-13 00:27:35.141377 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-13 
00:27:35.141388 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-13 00:27:35.141409 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-13 00:27:35.141419 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-13 00:27:35.141429 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-13 00:27:35.141439 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-13 00:27:35.141449 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-13 00:27:35.141463 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-13 00:27:35.141480 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-13 00:27:35.141497 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-13 00:27:35.141516 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-13 00:27:35.141534 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-13 00:27:35.141551 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-13 00:27:35.141561 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-13 00:27:35.141570 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-13 00:27:35.141580 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-13 00:27:35.141589 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-13 00:27:35.141599 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-13 00:27:35.141608 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-13 00:27:35.141617 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-13 00:27:35.141626 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-13 00:27:35.141636 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-13 00:27:35.141645 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-13 00:27:35.141654 | orchestrator | 2026-01-13 00:27:35.141670 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-01-13 00:27:35.141687 | orchestrator | Tuesday 13 January 2026 00:27:33 +0000 (0:00:07.949) 0:03:30.550 ******* 2026-01-13 00:27:35.141703 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-13 00:27:35.141719 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-13 00:27:35.141736 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-13 00:27:35.141754 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-13 00:27:35.141770 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-13 00:27:35.141786 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-13 00:27:35.141803 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-13 00:27:35.141819 | 
orchestrator | 2026-01-13 00:27:35.141833 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-01-13 00:27:35.141842 | orchestrator | Tuesday 13 January 2026 00:27:34 +0000 (0:00:00.637) 0:03:31.188 ******* 2026-01-13 00:27:35.141852 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-13 00:27:35.141872 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:27:35.141882 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-13 00:27:35.141891 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:27:35.141901 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-13 00:27:35.141910 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:27:35.141920 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-13 00:27:35.141929 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:27:35.141939 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-13 00:27:35.141954 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-13 00:27:35.141973 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-13 00:27:49.622793 | orchestrator | 2026-01-13 00:27:49.622913 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-01-13 00:27:49.622939 | orchestrator | Tuesday 13 January 2026 00:27:35 +0000 (0:00:00.617) 0:03:31.806 ******* 2026-01-13 00:27:49.622960 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-13 00:27:49.622982 | 
orchestrator | skipping: [testbed-manager] 2026-01-13 00:27:49.623005 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-13 00:27:49.623017 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:27:49.623028 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-13 00:27:49.623039 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:27:49.623051 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-13 00:27:49.623063 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:27:49.623082 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-13 00:27:49.623099 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-13 00:27:49.623117 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-13 00:27:49.623134 | orchestrator | 2026-01-13 00:27:49.623153 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-01-13 00:27:49.623171 | orchestrator | Tuesday 13 January 2026 00:27:36 +0000 (0:00:01.604) 0:03:33.410 ******* 2026-01-13 00:27:49.623191 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-13 00:27:49.623210 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:27:49.623231 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-13 00:27:49.623305 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:27:49.623317 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-13 00:27:49.623328 | orchestrator | 
skipping: [testbed-node-1]
2026-01-13 00:27:49.623339 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-13 00:27:49.623376 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:27:49.623400 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-13 00:27:49.623411 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-13 00:27:49.623422 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-13 00:27:49.623462 | orchestrator |
2026-01-13 00:27:49.623480 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-13 00:27:49.623497 | orchestrator | Tuesday 13 January 2026 00:27:37 +0000 (0:00:00.602) 0:03:34.012 *******
2026-01-13 00:27:49.623516 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:27:49.623536 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:27:49.623555 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:27:49.623581 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:27:49.623592 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:27:49.623603 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:27:49.623614 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:27:49.623625 | orchestrator |
2026-01-13 00:27:49.623636 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-13 00:27:49.623647 | orchestrator | Tuesday 13 January 2026 00:27:37 +0000 (0:00:00.327) 0:03:34.340 *******
2026-01-13 00:27:49.623658 | orchestrator | ok: [testbed-manager]
2026-01-13 00:27:49.623673 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:27:49.623692 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:27:49.623741 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:27:49.623793 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:27:49.623999 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:27:49.624023 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:27:49.624033 | orchestrator |
2026-01-13 00:27:49.624044 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-13 00:27:49.624055 | orchestrator | Tuesday 13 January 2026 00:27:43 +0000 (0:00:05.539) 0:03:39.879 *******
2026-01-13 00:27:49.624066 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-13 00:27:49.624077 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:27:49.624088 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-13 00:27:49.624099 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-13 00:27:49.624109 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:27:49.624120 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-13 00:27:49.624131 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:27:49.624141 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-13 00:27:49.624152 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:27:49.624162 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:27:49.624173 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-13 00:27:49.624184 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:27:49.624194 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-13 00:27:49.624205 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:27:49.624216 | orchestrator |
2026-01-13 00:27:49.624226 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-13 00:27:49.624262 | orchestrator | Tuesday 13 January 2026 00:27:43 +0000 (0:00:00.328) 0:03:40.208 *******
2026-01-13 00:27:49.624274 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-13 00:27:49.624285 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-13 00:27:49.624296 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-13 00:27:49.624329 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-13 00:27:49.624341 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-13 00:27:49.624351 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-13 00:27:49.624362 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-13 00:27:49.624372 | orchestrator |
2026-01-13 00:27:49.624383 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-13 00:27:49.624394 | orchestrator | Tuesday 13 January 2026 00:27:44 +0000 (0:00:01.292) 0:03:41.501 *******
2026-01-13 00:27:49.624407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:27:49.624421 | orchestrator |
2026-01-13 00:27:49.624432 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-13 00:27:49.624457 | orchestrator | Tuesday 13 January 2026 00:27:45 +0000 (0:00:00.517) 0:03:42.018 *******
2026-01-13 00:27:49.624468 | orchestrator | ok: [testbed-manager]
2026-01-13 00:27:49.624479 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:27:49.624489 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:27:49.624507 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:27:49.624524 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:27:49.624567 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:27:49.624587 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:27:49.624606 | orchestrator |
2026-01-13 00:27:49.624650 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-01-13 00:27:49.624661 | orchestrator | Tuesday 13 January 2026 00:27:46 +0000 (0:00:01.265) 0:03:43.284 *******
2026-01-13 00:27:49.624672 | orchestrator | ok: [testbed-manager]
2026-01-13 00:27:49.624683 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:27:49.624694 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:27:49.624710 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:27:49.624727 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:27:49.624746 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:27:49.624762 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:27:49.624772 | orchestrator |
2026-01-13 00:27:49.624798 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-01-13 00:27:49.624810 | orchestrator | Tuesday 13 January 2026 00:27:47 +0000 (0:00:00.673) 0:03:43.957 *******
2026-01-13 00:27:49.624820 | orchestrator | changed: [testbed-manager]
2026-01-13 00:27:49.624831 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:27:49.624842 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:27:49.624852 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:27:49.624888 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:27:49.624908 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:27:49.624927 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:27:49.624945 | orchestrator |
2026-01-13 00:27:49.624989 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-01-13 00:27:49.625009 | orchestrator | Tuesday 13 January 2026 00:27:47 +0000 (0:00:00.636) 0:03:44.594 *******
2026-01-13 00:27:49.625020 | orchestrator | ok: [testbed-manager]
2026-01-13 00:27:49.625031 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:27:49.625041 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:27:49.625052 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:27:49.625063 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:27:49.625074 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:27:49.625084 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:27:49.625095 | orchestrator |
2026-01-13 00:27:49.625106 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-01-13 00:27:49.625117 | orchestrator | Tuesday 13 January 2026 00:27:48 +0000 (0:00:00.666) 0:03:45.261 *******
2026-01-13 00:27:49.625132 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768262699.1741004, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:49.625147 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768262720.6874278, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:49.625178 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768262712.4412637, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:49.625216 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768262727.7672517, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:54.712056 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768262721.3498943, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:54.712149 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768262703.8333607, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:54.712160 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768262705.3500478, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:54.712168 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:54.712175 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:54.712204 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:54.712225 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:54.712280 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:54.712289 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:54.712296 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 00:27:54.712305 | orchestrator |
2026-01-13 00:27:54.712320 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-01-13 00:27:54.712333 | orchestrator | Tuesday 13 January 2026 00:27:49 +0000 (0:00:01.020) 0:03:46.281 *******
2026-01-13 00:27:54.712345 | orchestrator | changed: [testbed-manager]
2026-01-13 00:27:54.712358 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:27:54.712369 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:27:54.712380 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:27:54.712392 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:27:54.712404 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:27:54.712415 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:27:54.712427 | orchestrator |
2026-01-13 00:27:54.712438 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-01-13 00:27:54.712450 | orchestrator | Tuesday 13 January 2026 00:27:50 +0000 (0:00:01.256) 0:03:47.538 *******
2026-01-13 00:27:54.712462 | orchestrator | changed: [testbed-manager]
2026-01-13 00:27:54.712474 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:27:54.712487 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:27:54.712510 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:27:54.712523 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:27:54.712537 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:27:54.712550 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:27:54.712563 | orchestrator |
2026-01-13 00:27:54.712576 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-01-13 00:27:54.712589 | orchestrator | Tuesday 13 January 2026 00:27:52 +0000 (0:00:01.155) 0:03:48.693 *******
2026-01-13 00:27:54.712602 | orchestrator | changed: [testbed-manager]
2026-01-13 00:27:54.712615 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:27:54.712626 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:27:54.712638 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:27:54.712650 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:27:54.712663 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:27:54.712676 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:27:54.712688 | orchestrator |
2026-01-13 00:27:54.712701 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-01-13 00:27:54.712715 | orchestrator | Tuesday 13 January 2026 00:27:53 +0000 (0:00:01.244) 0:03:49.938 *******
2026-01-13 00:27:54.712727 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:27:54.712740 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:27:54.712753 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:27:54.712766 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:27:54.712778 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:27:54.712791 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:27:54.712804 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:27:54.712816 | orchestrator |
2026-01-13 00:27:54.712829 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-01-13 00:27:54.712842 | orchestrator | Tuesday 13 January 2026 00:27:53 +0000 (0:00:00.290) 0:03:50.228 *******
2026-01-13 00:27:54.712854 | orchestrator | ok: [testbed-manager]
2026-01-13 00:27:54.712868 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:27:54.712886 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:27:54.712899 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:27:54.712911 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:27:54.712924 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:27:54.712937 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:27:54.712950 | orchestrator |
2026-01-13 00:27:54.712963 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-01-13 00:27:54.712975 | orchestrator | Tuesday 13 January 2026 00:27:54 +0000 (0:00:00.728) 0:03:50.957 *******
2026-01-13 00:27:54.712990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:27:54.713005 | orchestrator |
2026-01-13 00:27:54.713018 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-01-13 00:27:54.713041 | orchestrator | Tuesday 13 January 2026 00:27:54 +0000 (0:00:00.419) 0:03:51.377 *******
2026-01-13 00:29:13.707546 | orchestrator | ok: [testbed-manager]
2026-01-13 00:29:13.707626 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:29:13.707633 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:29:13.707638 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:29:13.707642 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:29:13.707646 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:29:13.707650 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:29:13.707654 | orchestrator |
2026-01-13 00:29:13.707659 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-01-13 00:29:13.707664 | orchestrator | Tuesday 13 January 2026 00:28:03 +0000 (0:00:08.546) 0:03:59.923 *******
2026-01-13 00:29:13.707668 | orchestrator | ok: [testbed-manager]
2026-01-13 00:29:13.707672 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:29:13.707676 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:29:13.707680 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:29:13.707698 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:29:13.707702 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:29:13.707706 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:29:13.707709 | orchestrator |
2026-01-13 00:29:13.707713 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-01-13 00:29:13.707717 | orchestrator | Tuesday 13 January 2026 00:28:04 +0000 (0:00:01.511) 0:04:01.434 *******
2026-01-13 00:29:13.707721 | orchestrator | ok: [testbed-manager]
2026-01-13 00:29:13.707725 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:29:13.707729 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:29:13.707732 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:29:13.707736 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:29:13.707740 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:29:13.707744 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:29:13.707747 | orchestrator |
2026-01-13 00:29:13.707751 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-01-13 00:29:13.707755 | orchestrator | Tuesday 13 January 2026 00:28:05 +0000 (0:00:01.231) 0:04:02.665 *******
2026-01-13 00:29:13.707759 | orchestrator | ok: [testbed-manager]
2026-01-13 00:29:13.707763 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:29:13.707766 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:29:13.707770 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:29:13.707774 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:29:13.707778 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:29:13.707781 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:29:13.707785 | orchestrator |
2026-01-13 00:29:13.707789 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-01-13 00:29:13.707794 | orchestrator | Tuesday 13 January 2026 00:28:06 +0000 (0:00:00.324) 0:04:02.990 *******
2026-01-13 00:29:13.707797 | orchestrator | ok: [testbed-manager]
2026-01-13 00:29:13.707801 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:29:13.707805 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:29:13.707809 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:29:13.707812 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:29:13.707816 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:29:13.707820 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:29:13.707824 | orchestrator |
2026-01-13 00:29:13.707827 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-01-13 00:29:13.707831 | orchestrator | Tuesday 13 January 2026 00:28:06 +0000 (0:00:00.347) 0:04:03.338 *******
2026-01-13 00:29:13.707835 | orchestrator | ok: [testbed-manager]
2026-01-13 00:29:13.707839 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:29:13.707843 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:29:13.707846 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:29:13.707850 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:29:13.707854 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:29:13.707857 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:29:13.707861 | orchestrator |
2026-01-13 00:29:13.707865 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-01-13 00:29:13.707869 | orchestrator | Tuesday 13 January 2026 00:28:06 +0000 (0:00:00.318) 0:04:03.657 *******
2026-01-13 00:29:13.707872 | orchestrator | ok: [testbed-manager]
2026-01-13 00:29:13.707876 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:29:13.707880 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:29:13.707884 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:29:13.707887 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:29:13.707891 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:29:13.707895 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:29:13.707899 | orchestrator |
2026-01-13 00:29:13.707903 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-01-13 00:29:13.707906 | orchestrator | Tuesday 13 January 2026 00:28:11 +0000 (0:00:04.886) 0:04:08.543 *******
2026-01-13 00:29:13.707912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:29:13.707922 | orchestrator |
2026-01-13 00:29:13.707926 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-01-13 00:29:13.707930 | orchestrator | Tuesday 13 January 2026 00:28:12 +0000 (0:00:00.464) 0:04:09.008 *******
2026-01-13 00:29:13.707934 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-01-13 00:29:13.707937 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-01-13 00:29:13.707942 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-01-13 00:29:13.707946 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-01-13 00:29:13.707950 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:29:13.707954 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-01-13 00:29:13.707958 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-01-13 00:29:13.707961 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:29:13.707965 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-01-13 00:29:13.707969 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-01-13 00:29:13.707973 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:29:13.707976 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-01-13 00:29:13.707980 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-01-13 00:29:13.707984 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:29:13.707988 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-01-13 00:29:13.707992 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:29:13.708006 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-01-13 00:29:13.708010 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:29:13.708014 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-01-13 00:29:13.708018 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-01-13 00:29:13.708022 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:29:13.708025 | orchestrator |
2026-01-13 00:29:13.708029 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-01-13 00:29:13.708033 | orchestrator | Tuesday 13 January 2026 00:28:12 +0000 (0:00:00.357) 0:04:09.365 *******
2026-01-13 00:29:13.708037 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:29:13.708041 | orchestrator |
2026-01-13 00:29:13.708045 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-01-13 00:29:13.708049 | orchestrator | Tuesday 13 January 2026 00:28:13 +0000 (0:00:00.407) 0:04:09.773 *******
2026-01-13 00:29:13.708053 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-01-13 00:29:13.708056 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:29:13.708060 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-13 00:29:13.708064 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-13 00:29:13.708068 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:29:13.708072 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-13 00:29:13.708076 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:29:13.708080 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-13 00:29:13.708083 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:29:13.708087 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-13 00:29:13.708091 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:29:13.708095 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:29:13.708099 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-13 00:29:13.708102 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:29:13.708106 | orchestrator |
2026-01-13 00:29:13.708121 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-13 00:29:13.708129 | orchestrator | Tuesday 13 January 2026 00:28:13 +0000 (0:00:00.331) 0:04:10.105 *******
2026-01-13 00:29:13.708133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:29:13.708137 | orchestrator |
2026-01-13 00:29:13.708141 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-13 00:29:13.708144 | orchestrator | Tuesday 13 January 2026 00:28:13 +0000 (0:00:00.401) 0:04:10.507 *******
2026-01-13 00:29:13.708148 | orchestrator | changed: [testbed-manager]
2026-01-13 00:29:13.708152 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:29:13.708156 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:29:13.708160 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:29:13.708163 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:29:13.708167 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:29:13.708171 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:29:13.708175 | orchestrator |
2026-01-13 00:29:13.708178 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-13 00:29:13.708182 | orchestrator | Tuesday 13 January 2026 00:28:48 +0000 (0:00:34.302) 0:04:44.809 *******
2026-01-13 00:29:13.708186 | orchestrator | changed: [testbed-manager]
2026-01-13 00:29:13.708190 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:29:13.708193 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:29:13.708197 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:29:13.708201 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:29:13.708205 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:29:13.708209 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:29:13.708212 | orchestrator |
2026-01-13 00:29:13.708216 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-13 00:29:13.708220 | orchestrator | Tuesday 13 January 2026 00:28:56 +0000 (0:00:08.842) 0:04:53.652 *******
2026-01-13 00:29:13.708224 | orchestrator | changed: [testbed-manager]
2026-01-13 00:29:13.708227 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:29:13.708231 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:29:13.708235 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:29:13.708239 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:29:13.708242 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:29:13.708276 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:29:13.708280 | orchestrator |
2026-01-13 00:29:13.708283 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-13 00:29:13.708290 | orchestrator | Tuesday 13 January 2026 00:29:05 +0000 (0:00:08.171) 0:05:01.823 *******
2026-01-13 00:29:13.708293 | orchestrator | ok: [testbed-manager]
2026-01-13 00:29:13.708297 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:29:13.708301 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:29:13.708305 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:29:13.708308 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:29:13.708312 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:29:13.708316 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:29:13.708319 | orchestrator |
2026-01-13 00:29:13.708323 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-13 00:29:13.708327 | orchestrator | Tuesday 13 January 2026 00:29:07 +0000 (0:00:01.923) 0:05:03.747 *******
2026-01-13 00:29:13.708331 | orchestrator | changed: [testbed-manager]
2026-01-13 00:29:13.708334 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:29:13.708338 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:29:13.708342 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:29:13.708345 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:29:13.708349 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:29:13.708353 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:29:13.708357 | orchestrator |
2026-01-13 00:29:13.708363 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-13 00:29:25.048375 | orchestrator | Tuesday 13 January 2026 00:29:13 +0000 (0:00:06.615) 0:05:10.362 *******
2026-01-13 00:29:25.048480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:29:25.048495 | orchestrator |
2026-01-13 00:29:25.048502 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-13 00:29:25.048507 | orchestrator | Tuesday 13 January 2026 00:29:14 +0000 (0:00:00.620) 0:05:10.983 *******
2026-01-13 00:29:25.048511 | orchestrator | changed: [testbed-manager]
2026-01-13 00:29:25.048516 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:29:25.048520 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:29:25.048524 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:29:25.048528 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:29:25.048532 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:29:25.048536 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:29:25.048539 | orchestrator |
2026-01-13 00:29:25.048543 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-13 00:29:25.048548 | orchestrator | Tuesday 13 January 2026 00:29:15 +0000 (0:00:00.822) 0:05:11.805 *******
2026-01-13 00:29:25.048551 | orchestrator | ok: [testbed-manager]
2026-01-13 00:29:25.048557 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:29:25.048560 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:29:25.048564 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:29:25.048568 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:29:25.048572 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:29:25.048575 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:29:25.048579 | orchestrator |
2026-01-13 00:29:25.048583 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-13 00:29:25.048587 | orchestrator | Tuesday 13 January 2026 00:29:16 +0000 (0:00:01.768) 0:05:13.574 *******
2026-01-13 00:29:25.048591 | orchestrator | changed: [testbed-manager]
2026-01-13 00:29:25.048594 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:29:25.048598 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:29:25.048602 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:29:25.048606 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:29:25.048609 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:29:25.048613 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:29:25.048617 | orchestrator |
2026-01-13 00:29:25.048621 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-13 00:29:25.048625 | orchestrator | Tuesday 13 January 2026 00:29:17 +0000 (0:00:00.810) 0:05:14.385 *******
2026-01-13 00:29:25.048628 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:29:25.048632 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:29:25.048636 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:29:25.048640 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:29:25.048643 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:29:25.048647 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:29:25.048651 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:29:25.048655 | orchestrator |
2026-01-13 00:29:25.048658 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-13 00:29:25.048662 | orchestrator | Tuesday 13 January 2026 00:29:18 +0000 (0:00:00.295) 0:05:14.680 *******
2026-01-13 00:29:25.048666 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:29:25.048670 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:29:25.048673 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:29:25.048677 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:29:25.048681 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:29:25.048685 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:29:25.048688 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:29:25.048692 | orchestrator |
2026-01-13 00:29:25.048696 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-01-13 00:29:25.048718 | orchestrator | Tuesday 13 January 2026 00:29:18 +0000 (0:00:00.401) 0:05:15.081 *******
2026-01-13 00:29:25.048722 | orchestrator | ok: [testbed-manager]
2026-01-13 00:29:25.048726 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:29:25.048730 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:29:25.048733 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:29:25.048737 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:29:25.048741 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:29:25.048745 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:29:25.048748 | orchestrator |
2026-01-13 00:29:25.048752 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-01-13 00:29:25.048756 | orchestrator | Tuesday 13 January 2026 00:29:18 +0000 (0:00:00.280) 0:05:15.362 *******
2026-01-13 00:29:25.048760 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:29:25.048764 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:29:25.048767 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:29:25.048771 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:29:25.048775 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:29:25.048779 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:29:25.048782 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:29:25.048786 | orchestrator |
2026-01-13 00:29:25.048790 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-01-13 00:29:25.048804 | orchestrator | Tuesday 13 January 2026 00:29:18 +0000 (0:00:00.269) 0:05:15.631 *******
2026-01-13 00:29:25.048809 | orchestrator | ok: [testbed-manager]
2026-01-13 00:29:25.048812 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:29:25.048816 | orchestrator | ok: [testbed-node-4] 2026-01-13
00:29:25.048820 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:29:25.048824 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:29:25.048827 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:29:25.048831 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:29:25.048835 | orchestrator | 2026-01-13 00:29:25.048838 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-01-13 00:29:25.048842 | orchestrator | Tuesday 13 January 2026 00:29:19 +0000 (0:00:00.280) 0:05:15.912 ******* 2026-01-13 00:29:25.048846 | orchestrator | ok: [testbed-manager] =>  2026-01-13 00:29:25.048850 | orchestrator |  docker_version: 5:27.5.1 2026-01-13 00:29:25.048853 | orchestrator | ok: [testbed-node-3] =>  2026-01-13 00:29:25.048857 | orchestrator |  docker_version: 5:27.5.1 2026-01-13 00:29:25.048861 | orchestrator | ok: [testbed-node-4] =>  2026-01-13 00:29:25.048865 | orchestrator |  docker_version: 5:27.5.1 2026-01-13 00:29:25.048868 | orchestrator | ok: [testbed-node-5] =>  2026-01-13 00:29:25.048872 | orchestrator |  docker_version: 5:27.5.1 2026-01-13 00:29:25.048888 | orchestrator | ok: [testbed-node-0] =>  2026-01-13 00:29:25.048892 | orchestrator |  docker_version: 5:27.5.1 2026-01-13 00:29:25.048896 | orchestrator | ok: [testbed-node-1] =>  2026-01-13 00:29:25.048900 | orchestrator |  docker_version: 5:27.5.1 2026-01-13 00:29:25.048903 | orchestrator | ok: [testbed-node-2] =>  2026-01-13 00:29:25.048907 | orchestrator |  docker_version: 5:27.5.1 2026-01-13 00:29:25.048911 | orchestrator | 2026-01-13 00:29:25.048915 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-01-13 00:29:25.048919 | orchestrator | Tuesday 13 January 2026 00:29:19 +0000 (0:00:00.290) 0:05:16.203 ******* 2026-01-13 00:29:25.048922 | orchestrator | ok: [testbed-manager] =>  2026-01-13 00:29:25.048926 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-13 00:29:25.048930 | orchestrator | ok: 
[testbed-node-3] =>
2026-01-13 00:29:25.048935 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-13 00:29:25.048939 | orchestrator | ok: [testbed-node-4] =>
2026-01-13 00:29:25.048943 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-13 00:29:25.048948 | orchestrator | ok: [testbed-node-5] =>
2026-01-13 00:29:25.048952 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-13 00:29:25.048957 | orchestrator | ok: [testbed-node-0] =>
2026-01-13 00:29:25.048961 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-13 00:29:25.048966 | orchestrator | ok: [testbed-node-1] =>
2026-01-13 00:29:25.048974 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-13 00:29:25.048979 | orchestrator | ok: [testbed-node-2] =>
2026-01-13 00:29:25.048983 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-13 00:29:25.048987 | orchestrator |
2026-01-13 00:29:25.048992 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-01-13 00:29:25.048996 | orchestrator | Tuesday 13 January 2026 00:29:19 +0000 (0:00:00.292) 0:05:16.495 *******
2026-01-13 00:29:25.049001 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:29:25.049005 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:29:25.049009 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:29:25.049014 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:29:25.049018 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:29:25.049022 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:29:25.049027 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:29:25.049031 | orchestrator |
2026-01-13 00:29:25.049036 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-01-13 00:29:25.049040 | orchestrator | Tuesday 13 January 2026 00:29:20 +0000 (0:00:00.267) 0:05:16.763 *******
2026-01-13 00:29:25.049045 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:29:25.049049 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:29:25.049054 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:29:25.049058 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:29:25.049062 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:29:25.049066 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:29:25.049071 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:29:25.049075 | orchestrator |
2026-01-13 00:29:25.049080 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-01-13 00:29:25.049084 | orchestrator | Tuesday 13 January 2026 00:29:20 +0000 (0:00:00.306) 0:05:17.070 *******
2026-01-13 00:29:25.049090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:29:25.049097 | orchestrator |
2026-01-13 00:29:25.049101 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-01-13 00:29:25.049106 | orchestrator | Tuesday 13 January 2026 00:29:20 +0000 (0:00:00.409) 0:05:17.479 *******
2026-01-13 00:29:25.049110 | orchestrator | ok: [testbed-manager]
2026-01-13 00:29:25.049115 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:29:25.049119 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:29:25.049124 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:29:25.049128 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:29:25.049133 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:29:25.049137 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:29:25.049141 | orchestrator |
2026-01-13 00:29:25.049146 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-01-13 00:29:25.049150 | orchestrator | Tuesday 13 January 2026 00:29:21 +0000 (0:00:00.982) 0:05:18.462 *******
2026-01-13 00:29:25.049155 | orchestrator | ok: [testbed-manager]
2026-01-13 00:29:25.049159 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:29:25.049163 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:29:25.049168 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:29:25.049172 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:29:25.049176 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:29:25.049181 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:29:25.049185 | orchestrator |
2026-01-13 00:29:25.049189 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-01-13 00:29:25.049194 | orchestrator | Tuesday 13 January 2026 00:29:24 +0000 (0:00:02.877) 0:05:21.339 *******
2026-01-13 00:29:25.049199 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-01-13 00:29:25.049204 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-01-13 00:29:25.049208 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-01-13 00:29:25.049218 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-01-13 00:29:25.049223 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-01-13 00:29:25.049228 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-01-13 00:29:25.049232 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:29:25.049237 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-01-13 00:29:25.049241 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-01-13 00:29:25.049269 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:29:25.049276 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-01-13 00:29:25.049282 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-01-13 00:29:25.049287 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-01-13 00:29:25.049291 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-01-13 00:29:25.049296 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:29:25.049300 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-01-13 00:29:25.049308 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-01-13 00:30:30.383927 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-01-13 00:30:30.384018 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:30:30.384029 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-01-13 00:30:30.384037 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-01-13 00:30:30.384044 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-01-13 00:30:30.384052 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:30:30.384059 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:30:30.384066 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-01-13 00:30:30.384073 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-01-13 00:30:30.384080 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-01-13 00:30:30.384087 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:30:30.384094 | orchestrator |
2026-01-13 00:30:30.384103 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-01-13 00:30:30.384111 | orchestrator | Tuesday 13 January 2026 00:29:25 +0000 (0:00:00.566) 0:05:21.906 *******
2026-01-13 00:30:30.384118 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:30.384125 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:30.384132 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:30.384139 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:30.384146 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:30.384153 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:30.384159 | orchestrator | changed: [testbed-node-0]
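The repository and pin tasks that follow reproduce Docker's standard apt setup on each node. A hand-run sketch is below; it only writes the pin file into a temp directory so it is runnable anywhere, with the privileged network steps left as comments. The pin value 5:27.5.1 is taken from the "Print used docker version" output earlier in this log; the file name, keyring path, and pin priority are illustrative assumptions, not values read from the osism.services.docker role.

```shell
# Manual equivalent of the repository + pin tasks (sketch).
# On a real Ubuntu 24.04 (noble) host, the commented steps need
# root and network access:
#   curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
#     | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
#   echo "deb [signed-by=/etc/apt/keyrings/docker.gpg] \
#     https://download.docker.com/linux/ubuntu noble stable" \
#     > /etc/apt/sources.list.d/docker.list
#   apt-get update
prefs_dir=$(mktemp -d)            # stand-in for /etc/apt/preferences.d
cat > "$prefs_dir/docker" <<'EOF'
Package: docker-ce
Pin: version 5:27.5.1*
Pin-Priority: 1001
EOF
# A Pin-Priority above 1000 makes apt keep this version even if a
# newer one appears in the repository, which is the effect the
# "Pin docker package version" task reports.
cat "$prefs_dir/docker"
```

With such a pin in place, `apt-get install docker-ce` resolves to 5:27.5.1, matching the version every host prints above.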
2026-01-13 00:30:30.384166 | orchestrator |
2026-01-13 00:30:30.384173 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-01-13 00:30:30.384180 | orchestrator | Tuesday 13 January 2026 00:29:32 +0000 (0:00:07.060) 0:05:28.967 *******
2026-01-13 00:30:30.384187 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:30.384193 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:30.384199 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:30.384206 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:30.384212 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:30.384219 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:30.384305 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:30.384313 | orchestrator |
2026-01-13 00:30:30.384320 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-01-13 00:30:30.384327 | orchestrator | Tuesday 13 January 2026 00:29:33 +0000 (0:00:01.162) 0:05:30.129 *******
2026-01-13 00:30:30.384334 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:30.384340 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:30.384347 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:30.384354 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:30.384361 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:30.384368 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:30.384396 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:30.384403 | orchestrator |
2026-01-13 00:30:30.384410 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-01-13 00:30:30.384416 | orchestrator | Tuesday 13 January 2026 00:29:41 +0000 (0:00:08.540) 0:05:38.669 *******
2026-01-13 00:30:30.384422 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:30.384429 | orchestrator | changed: [testbed-manager]
2026-01-13 00:30:30.384436 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:30.384443 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:30.384450 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:30.384457 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:30.384464 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:30.384471 | orchestrator |
2026-01-13 00:30:30.384478 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-01-13 00:30:30.384485 | orchestrator | Tuesday 13 January 2026 00:29:45 +0000 (0:00:03.441) 0:05:42.111 *******
2026-01-13 00:30:30.384492 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:30.384499 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:30.384506 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:30.384513 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:30.384519 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:30.384526 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:30.384533 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:30.384540 | orchestrator |
2026-01-13 00:30:30.384547 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-01-13 00:30:30.384553 | orchestrator | Tuesday 13 January 2026 00:29:46 +0000 (0:00:01.401) 0:05:43.512 *******
2026-01-13 00:30:30.384560 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:30.384567 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:30.384574 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:30.384581 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:30.384588 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:30.384595 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:30.384601 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:30.384608 | orchestrator |
2026-01-13 00:30:30.384615 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-01-13 00:30:30.384622 | orchestrator | Tuesday 13 January 2026 00:29:48 +0000 (0:00:00.627) 0:05:45.033 *******
2026-01-13 00:30:30.384629 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:30:30.384648 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:30:30.384655 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:30:30.384662 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:30:30.384669 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:30:30.384676 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:30:30.384684 | orchestrator | changed: [testbed-manager]
2026-01-13 00:30:30.384690 | orchestrator |
2026-01-13 00:30:30.384697 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-01-13 00:30:30.384704 | orchestrator | Tuesday 13 January 2026 00:29:48 +0000 (0:00:00.627) 0:05:45.660 *******
2026-01-13 00:30:30.384711 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:30.384718 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:30.384725 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:30.384732 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:30.384739 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:30.384746 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:30.384753 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:30.384759 | orchestrator |
2026-01-13 00:30:30.384766 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-01-13 00:30:30.384788 | orchestrator | Tuesday 13 January 2026 00:29:59 +0000 (0:00:10.733) 0:05:56.394 *******
2026-01-13 00:30:30.384795 | orchestrator | changed: [testbed-manager]
2026-01-13 00:30:30.384802 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:30.384809 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:30.384821 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:30.384826 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:30.384832 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:30.384838 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:30.384843 | orchestrator |
2026-01-13 00:30:30.384849 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-13 00:30:30.384855 | orchestrator | Tuesday 13 January 2026 00:30:00 +0000 (0:00:00.904) 0:05:57.299 *******
2026-01-13 00:30:30.384861 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:30.384868 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:30.384875 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:30.384882 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:30.384889 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:30.384896 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:30.384902 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:30.384909 | orchestrator |
2026-01-13 00:30:30.384916 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-13 00:30:30.384923 | orchestrator | Tuesday 13 January 2026 00:30:11 +0000 (0:00:10.399) 0:06:07.699 *******
2026-01-13 00:30:30.384930 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:30.384937 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:30.384943 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:30.384950 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:30.384956 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:30.384963 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:30.384969 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:30.384976 | orchestrator |
2026-01-13 00:30:30.384983 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-13 00:30:30.384989 | orchestrator | Tuesday 13 January 2026
00:30:23 +0000 (0:00:12.549) 0:06:20.248 *******
2026-01-13 00:30:30.384996 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-13 00:30:30.385003 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-13 00:30:30.385009 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-13 00:30:30.385016 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-13 00:30:30.385022 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-13 00:30:30.385028 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-13 00:30:30.385035 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-13 00:30:30.385042 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-13 00:30:30.385049 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-13 00:30:30.385055 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-13 00:30:30.385062 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-13 00:30:30.385069 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-13 00:30:30.385076 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-13 00:30:30.385083 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-13 00:30:30.385090 | orchestrator |
2026-01-13 00:30:30.385097 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-13 00:30:30.385103 | orchestrator | Tuesday 13 January 2026 00:30:24 +0000 (0:00:01.334) 0:06:21.583 *******
2026-01-13 00:30:30.385110 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:30:30.385117 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:30:30.385124 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:30:30.385131 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:30:30.385137 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:30:30.385144 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:30:30.385151 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:30:30.385158 | orchestrator |
2026-01-13 00:30:30.385165 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-13 00:30:30.385171 | orchestrator | Tuesday 13 January 2026 00:30:25 +0000 (0:00:00.521) 0:06:22.104 *******
2026-01-13 00:30:30.385183 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:30.385190 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:30.385197 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:30.385204 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:30.385211 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:30.385218 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:30.385246 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:30.385252 | orchestrator |
2026-01-13 00:30:30.385258 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-13 00:30:30.385265 | orchestrator | Tuesday 13 January 2026 00:30:29 +0000 (0:00:04.013) 0:06:26.118 *******
2026-01-13 00:30:30.385272 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:30:30.385278 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:30:30.385285 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:30:30.385291 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:30:30.385298 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:30:30.385304 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:30:30.385311 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:30:30.385318 | orchestrator |
2026-01-13 00:30:30.385325 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-13 00:30:30.385332 | orchestrator | Tuesday 13 January 2026 00:30:29 +0000 (0:00:00.494) 0:06:26.612 *******
2026-01-13 00:30:30.385339 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-13 00:30:30.385347 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-13 00:30:30.385354 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:30:30.385361 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-13 00:30:30.385367 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-13 00:30:30.385375 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:30:30.385417 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-13 00:30:30.385425 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-13 00:30:30.385432 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:30:30.385446 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-13 00:30:49.896843 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-13 00:30:49.896951 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:30:49.896966 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-13 00:30:49.896977 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-13 00:30:49.896987 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:30:49.896997 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-13 00:30:49.897007 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-13 00:30:49.897017 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:30:49.897026 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-13 00:30:49.897036 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-13 00:30:49.897045 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:30:49.897055 | orchestrator |
2026-01-13 00:30:49.897067 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-13 00:30:49.897078 | orchestrator | Tuesday 13 January 2026 00:30:30 +0000 (0:00:00.714) 0:06:27.327 *******
2026-01-13 00:30:49.897088 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:30:49.897097 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:30:49.897107 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:30:49.897116 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:30:49.897126 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:30:49.897135 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:30:49.897144 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:30:49.897154 | orchestrator |
2026-01-13 00:30:49.897164 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-13 00:30:49.897194 | orchestrator | Tuesday 13 January 2026 00:30:31 +0000 (0:00:00.489) 0:06:27.816 *******
2026-01-13 00:30:49.897205 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:30:49.897312 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:30:49.897331 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:30:49.897347 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:30:49.897364 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:30:49.897374 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:30:49.897386 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:30:49.897397 | orchestrator |
2026-01-13 00:30:49.897408 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-13 00:30:49.897419 | orchestrator | Tuesday 13 January 2026 00:30:31 +0000 (0:00:00.478) 0:06:28.295 *******
2026-01-13 00:30:49.897430 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:30:49.897441 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:30:49.897452 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:30:49.897463 | orchestrator | skipping:
[testbed-node-5]
2026-01-13 00:30:49.897474 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:30:49.897485 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:30:49.897495 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:30:49.897506 | orchestrator |
2026-01-13 00:30:49.897518 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-13 00:30:49.897529 | orchestrator | Tuesday 13 January 2026 00:30:32 +0000 (0:00:00.531) 0:06:28.827 *******
2026-01-13 00:30:49.897540 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:49.897552 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:30:49.897562 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:30:49.897574 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:30:49.897584 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:30:49.897595 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:30:49.897605 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:30:49.897615 | orchestrator |
2026-01-13 00:30:49.897626 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-13 00:30:49.897637 | orchestrator | Tuesday 13 January 2026 00:30:34 +0000 (0:00:02.211) 0:06:31.038 *******
2026-01-13 00:30:49.897650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:30:49.897663 | orchestrator |
2026-01-13 00:30:49.897674 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-13 00:30:49.897685 | orchestrator | Tuesday 13 January 2026 00:30:35 +0000 (0:00:00.825) 0:06:31.864 *******
2026-01-13 00:30:49.897696 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:49.897708 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:49.897717 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:49.897726 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:49.897736 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:49.897745 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:49.897754 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:49.897764 | orchestrator |
2026-01-13 00:30:49.897773 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-01-13 00:30:49.897783 | orchestrator | Tuesday 13 January 2026 00:30:36 +0000 (0:00:00.848) 0:06:32.712 *******
2026-01-13 00:30:49.897806 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:49.897817 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:49.897828 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:49.897838 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:49.897849 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:49.897859 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:49.897870 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:49.897880 | orchestrator |
2026-01-13 00:30:49.897891 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-01-13 00:30:49.897912 | orchestrator | Tuesday 13 January 2026 00:30:36 +0000 (0:00:00.868) 0:06:33.581 *******
2026-01-13 00:30:49.897923 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:49.897933 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:49.897944 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:49.897954 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:49.897965 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:49.897975 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:49.897986 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:49.897996 | orchestrator |
2026-01-13 00:30:49.898007 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-01-13 00:30:49.898098 | orchestrator | Tuesday 13 January 2026 00:30:38 +0000 (0:00:01.517) 0:06:35.099 *******
2026-01-13 00:30:49.898114 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:30:49.898125 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:30:49.898136 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:30:49.898146 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:30:49.898157 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:30:49.898168 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:30:49.898178 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:30:49.898189 | orchestrator |
2026-01-13 00:30:49.898200 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-01-13 00:30:49.898237 | orchestrator | Tuesday 13 January 2026 00:30:39 +0000 (0:00:01.480) 0:06:36.579 *******
2026-01-13 00:30:49.898258 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:49.898269 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:49.898280 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:49.898291 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:49.898301 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:49.898312 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:49.898322 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:49.898332 | orchestrator |
2026-01-13 00:30:49.898343 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-01-13 00:30:49.898354 | orchestrator | Tuesday 13 January 2026 00:30:41 +0000 (0:00:01.282) 0:06:37.862 *******
2026-01-13 00:30:49.898364 | orchestrator | changed: [testbed-manager]
2026-01-13 00:30:49.898374 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:30:49.898385 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:30:49.898395 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:30:49.898406 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:30:49.898416 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:30:49.898427 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:30:49.898437 | orchestrator |
2026-01-13 00:30:49.898448 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-01-13 00:30:49.898458 | orchestrator | Tuesday 13 January 2026 00:30:42 +0000 (0:00:01.413) 0:06:39.276 *******
2026-01-13 00:30:49.898470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:30:49.898480 | orchestrator |
2026-01-13 00:30:49.898491 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-01-13 00:30:49.898502 | orchestrator | Tuesday 13 January 2026 00:30:43 +0000 (0:00:01.003) 0:06:40.279 *******
2026-01-13 00:30:49.898512 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:49.898523 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:30:49.898534 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:30:49.898544 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:30:49.898555 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:30:49.898565 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:30:49.898575 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:30:49.898586 | orchestrator |
2026-01-13 00:30:49.898597 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-01-13 00:30:49.898607 | orchestrator | Tuesday 13 January 2026 00:30:45 +0000 (0:00:01.420) 0:06:41.699 *******
2026-01-13 00:30:49.898633 | orchestrator | ok: [testbed-manager]
2026-01-13 00:30:49.898644 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:30:49.898654 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:30:49.898665 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:30:49.898675 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:30:49.898685 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:30:49.898696 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:30:49.898706 | orchestrator | 2026-01-13 00:30:49.898717 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-13 00:30:49.898727 | orchestrator | Tuesday 13 January 2026 00:30:46 +0000 (0:00:01.211) 0:06:42.911 ******* 2026-01-13 00:30:49.898738 | orchestrator | ok: [testbed-manager] 2026-01-13 00:30:49.898748 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:30:49.898758 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:30:49.898769 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:30:49.898779 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:30:49.898790 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:30:49.898800 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:30:49.898810 | orchestrator | 2026-01-13 00:30:49.898821 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-13 00:30:49.898832 | orchestrator | Tuesday 13 January 2026 00:30:47 +0000 (0:00:01.182) 0:06:44.093 ******* 2026-01-13 00:30:49.898843 | orchestrator | ok: [testbed-manager] 2026-01-13 00:30:49.898853 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:30:49.898864 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:30:49.898874 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:30:49.898885 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:30:49.898895 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:30:49.898905 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:30:49.898916 | orchestrator | 2026-01-13 00:30:49.898926 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-01-13 00:30:49.898937 | orchestrator | Tuesday 13 January 2026 00:30:48 +0000 (0:00:01.309) 0:06:45.403 ******* 2026-01-13 00:30:49.898948 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:30:49.898958 | orchestrator | 2026-01-13 00:30:49.898969 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-13 00:30:49.898980 | orchestrator | Tuesday 13 January 2026 00:30:49 +0000 (0:00:00.860) 0:06:46.264 ******* 2026-01-13 00:30:49.898990 | orchestrator | 2026-01-13 00:30:49.899001 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-13 00:30:49.899012 | orchestrator | Tuesday 13 January 2026 00:30:49 +0000 (0:00:00.039) 0:06:46.304 ******* 2026-01-13 00:30:49.899022 | orchestrator | 2026-01-13 00:30:49.899033 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-13 00:30:49.899043 | orchestrator | Tuesday 13 January 2026 00:30:49 +0000 (0:00:00.037) 0:06:46.342 ******* 2026-01-13 00:30:49.899054 | orchestrator | 2026-01-13 00:30:49.899065 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-13 00:30:49.899083 | orchestrator | Tuesday 13 January 2026 00:30:49 +0000 (0:00:00.044) 0:06:46.386 ******* 2026-01-13 00:31:16.867156 | orchestrator | 2026-01-13 00:31:16.867393 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-13 00:31:16.867413 | orchestrator | Tuesday 13 January 2026 00:30:49 +0000 (0:00:00.038) 0:06:46.424 ******* 2026-01-13 00:31:16.867424 | orchestrator | 2026-01-13 00:31:16.867434 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-13 00:31:16.867444 | orchestrator | Tuesday 13 January 2026 00:30:49 +0000 (0:00:00.037) 0:06:46.462 ******* 2026-01-13 00:31:16.867454 | orchestrator | 
2026-01-13 00:31:16.867463 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-13 00:31:16.867473 | orchestrator | Tuesday 13 January 2026 00:30:49 +0000 (0:00:00.044) 0:06:46.507 ******* 2026-01-13 00:31:16.867505 | orchestrator | 2026-01-13 00:31:16.867516 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-13 00:31:16.867525 | orchestrator | Tuesday 13 January 2026 00:30:49 +0000 (0:00:00.038) 0:06:46.545 ******* 2026-01-13 00:31:16.867535 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:31:16.867546 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:31:16.867556 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:31:16.867565 | orchestrator | 2026-01-13 00:31:16.867575 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-13 00:31:16.867585 | orchestrator | Tuesday 13 January 2026 00:30:51 +0000 (0:00:01.380) 0:06:47.926 ******* 2026-01-13 00:31:16.867594 | orchestrator | changed: [testbed-manager] 2026-01-13 00:31:16.867605 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:31:16.867614 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:31:16.867624 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:31:16.867633 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:31:16.867642 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:31:16.867652 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:31:16.867661 | orchestrator | 2026-01-13 00:31:16.867670 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-13 00:31:16.867680 | orchestrator | Tuesday 13 January 2026 00:30:52 +0000 (0:00:01.482) 0:06:49.408 ******* 2026-01-13 00:31:16.867690 | orchestrator | changed: [testbed-manager] 2026-01-13 00:31:16.867700 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:31:16.867709 | orchestrator | changed: [testbed-node-4] 
2026-01-13 00:31:16.867719 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:31:16.867728 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:31:16.867737 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:31:16.867747 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:31:16.867756 | orchestrator | 2026-01-13 00:31:16.867766 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-13 00:31:16.867775 | orchestrator | Tuesday 13 January 2026 00:30:54 +0000 (0:00:01.423) 0:06:50.832 ******* 2026-01-13 00:31:16.867785 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:31:16.867794 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:31:16.867803 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:31:16.867812 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:31:16.867822 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:31:16.867831 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:31:16.867841 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:31:16.867850 | orchestrator | 2026-01-13 00:31:16.867860 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-13 00:31:16.867869 | orchestrator | Tuesday 13 January 2026 00:30:56 +0000 (0:00:02.309) 0:06:53.141 ******* 2026-01-13 00:31:16.867879 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:31:16.867888 | orchestrator | 2026-01-13 00:31:16.867898 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-13 00:31:16.867907 | orchestrator | Tuesday 13 January 2026 00:30:56 +0000 (0:00:00.105) 0:06:53.247 ******* 2026-01-13 00:31:16.867917 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:16.867926 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:31:16.867935 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:31:16.867945 | orchestrator | changed: [testbed-node-3] 2026-01-13 
00:31:16.867954 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:31:16.867964 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:31:16.867973 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:31:16.867982 | orchestrator | 2026-01-13 00:31:16.867993 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-13 00:31:16.868003 | orchestrator | Tuesday 13 January 2026 00:30:57 +0000 (0:00:01.066) 0:06:54.313 ******* 2026-01-13 00:31:16.868012 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:31:16.868021 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:31:16.868031 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:31:16.868040 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:31:16.868058 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:31:16.868067 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:31:16.868077 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:31:16.868086 | orchestrator | 2026-01-13 00:31:16.868096 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-13 00:31:16.868106 | orchestrator | Tuesday 13 January 2026 00:30:58 +0000 (0:00:00.506) 0:06:54.819 ******* 2026-01-13 00:31:16.868131 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:31:16.868143 | orchestrator | 2026-01-13 00:31:16.868153 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-13 00:31:16.868163 | orchestrator | Tuesday 13 January 2026 00:30:59 +0000 (0:00:01.037) 0:06:55.857 ******* 2026-01-13 00:31:16.868172 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:16.868182 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:31:16.868222 | orchestrator 
| ok: [testbed-node-4] 2026-01-13 00:31:16.868238 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:31:16.868248 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:31:16.868258 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:31:16.868267 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:31:16.868277 | orchestrator | 2026-01-13 00:31:16.868287 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-13 00:31:16.868297 | orchestrator | Tuesday 13 January 2026 00:31:00 +0000 (0:00:00.837) 0:06:56.695 ******* 2026-01-13 00:31:16.868307 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-13 00:31:16.868335 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-13 00:31:16.868345 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-13 00:31:16.868355 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-13 00:31:16.868364 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-13 00:31:16.868374 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-13 00:31:16.868383 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-13 00:31:16.868393 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-13 00:31:16.868403 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-13 00:31:16.868412 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-13 00:31:16.868422 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-13 00:31:16.868431 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-01-13 00:31:16.868440 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-13 00:31:16.868450 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-01-13 00:31:16.868460 | orchestrator | 2026-01-13 00:31:16.868469 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-01-13 00:31:16.868479 | orchestrator | Tuesday 13 January 2026 00:31:02 +0000 (0:00:02.566) 0:06:59.261 ******* 2026-01-13 00:31:16.868488 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:31:16.868498 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:31:16.868507 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:31:16.868517 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:31:16.868526 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:31:16.868536 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:31:16.868546 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:31:16.868555 | orchestrator | 2026-01-13 00:31:16.868565 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-13 00:31:16.868574 | orchestrator | Tuesday 13 January 2026 00:31:03 +0000 (0:00:00.634) 0:06:59.895 ******* 2026-01-13 00:31:16.868586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:31:16.868605 | orchestrator | 2026-01-13 00:31:16.868615 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-13 00:31:16.868625 | orchestrator | Tuesday 13 January 2026 00:31:03 +0000 (0:00:00.771) 0:07:00.667 ******* 2026-01-13 00:31:16.868634 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:16.868644 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:31:16.868653 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:31:16.868662 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:31:16.868672 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:31:16.868683 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:31:16.868699 | orchestrator | ok: 
[testbed-node-2] 2026-01-13 00:31:16.868715 | orchestrator | 2026-01-13 00:31:16.868731 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-13 00:31:16.868746 | orchestrator | Tuesday 13 January 2026 00:31:04 +0000 (0:00:00.828) 0:07:01.495 ******* 2026-01-13 00:31:16.868761 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:16.868776 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:31:16.868791 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:31:16.868807 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:31:16.868823 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:31:16.868841 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:31:16.868858 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:31:16.868873 | orchestrator | 2026-01-13 00:31:16.868883 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-13 00:31:16.868892 | orchestrator | Tuesday 13 January 2026 00:31:05 +0000 (0:00:01.033) 0:07:02.528 ******* 2026-01-13 00:31:16.868902 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:31:16.868911 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:31:16.868920 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:31:16.868930 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:31:16.868940 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:31:16.868949 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:31:16.868958 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:31:16.868968 | orchestrator | 2026-01-13 00:31:16.868977 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-13 00:31:16.868987 | orchestrator | Tuesday 13 January 2026 00:31:06 +0000 (0:00:00.472) 0:07:03.001 ******* 2026-01-13 00:31:16.868997 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:16.869006 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:31:16.869016 | 
orchestrator | ok: [testbed-node-4] 2026-01-13 00:31:16.869025 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:31:16.869035 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:31:16.869044 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:31:16.869053 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:31:16.869063 | orchestrator | 2026-01-13 00:31:16.869078 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-01-13 00:31:16.869088 | orchestrator | Tuesday 13 January 2026 00:31:07 +0000 (0:00:01.625) 0:07:04.626 ******* 2026-01-13 00:31:16.869098 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:31:16.869107 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:31:16.869117 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:31:16.869126 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:31:16.869135 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:31:16.869145 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:31:16.869154 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:31:16.869164 | orchestrator | 2026-01-13 00:31:16.869176 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-01-13 00:31:16.869220 | orchestrator | Tuesday 13 January 2026 00:31:08 +0000 (0:00:00.481) 0:07:05.108 ******* 2026-01-13 00:31:16.869238 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:16.869253 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:31:16.869268 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:31:16.869284 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:31:16.869311 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:31:16.869328 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:31:16.869356 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:31:49.952286 | orchestrator | 2026-01-13 00:31:49.952376 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-01-13 00:31:49.952389 | orchestrator | Tuesday 13 January 2026 00:31:16 +0000 (0:00:08.415) 0:07:13.523 ******* 2026-01-13 00:31:49.952397 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:49.952406 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:31:49.952413 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:31:49.952421 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:31:49.952428 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:31:49.952435 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:31:49.952442 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:31:49.952450 | orchestrator | 2026-01-13 00:31:49.952457 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-01-13 00:31:49.952465 | orchestrator | Tuesday 13 January 2026 00:31:18 +0000 (0:00:01.527) 0:07:15.051 ******* 2026-01-13 00:31:49.952472 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:49.952479 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:31:49.952486 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:31:49.952493 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:31:49.952500 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:31:49.952507 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:31:49.952515 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:31:49.952522 | orchestrator | 2026-01-13 00:31:49.952529 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-01-13 00:31:49.952536 | orchestrator | Tuesday 13 January 2026 00:31:20 +0000 (0:00:01.890) 0:07:16.942 ******* 2026-01-13 00:31:49.952544 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:49.952551 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:31:49.952559 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:31:49.952566 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:31:49.952573 | 
orchestrator | changed: [testbed-node-0] 2026-01-13 00:31:49.952580 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:31:49.952587 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:31:49.952594 | orchestrator | 2026-01-13 00:31:49.952602 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-13 00:31:49.952609 | orchestrator | Tuesday 13 January 2026 00:31:21 +0000 (0:00:01.637) 0:07:18.579 ******* 2026-01-13 00:31:49.952616 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:49.952623 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:31:49.952631 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:31:49.952638 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:31:49.952645 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:31:49.952652 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:31:49.952659 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:31:49.952666 | orchestrator | 2026-01-13 00:31:49.952674 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-13 00:31:49.952681 | orchestrator | Tuesday 13 January 2026 00:31:22 +0000 (0:00:00.952) 0:07:19.531 ******* 2026-01-13 00:31:49.952688 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:31:49.952696 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:31:49.952703 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:31:49.952710 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:31:49.952717 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:31:49.952724 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:31:49.952732 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:31:49.952739 | orchestrator | 2026-01-13 00:31:49.952746 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-01-13 00:31:49.952753 | orchestrator | Tuesday 13 January 2026 00:31:23 +0000 (0:00:00.969) 0:07:20.501 ******* 
2026-01-13 00:31:49.952761 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:31:49.952768 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:31:49.952796 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:31:49.952806 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:31:49.952814 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:31:49.952822 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:31:49.952831 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:31:49.952839 | orchestrator | 2026-01-13 00:31:49.952847 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-01-13 00:31:49.952856 | orchestrator | Tuesday 13 January 2026 00:31:24 +0000 (0:00:00.552) 0:07:21.053 ******* 2026-01-13 00:31:49.952864 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:49.952872 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:31:49.952880 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:31:49.952888 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:31:49.952896 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:31:49.952905 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:31:49.952913 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:31:49.952921 | orchestrator | 2026-01-13 00:31:49.952929 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-01-13 00:31:49.952938 | orchestrator | Tuesday 13 January 2026 00:31:24 +0000 (0:00:00.512) 0:07:21.566 ******* 2026-01-13 00:31:49.952946 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:49.952954 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:31:49.952963 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:31:49.952971 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:31:49.952979 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:31:49.952987 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:31:49.952995 | orchestrator | ok: [testbed-node-2] 2026-01-13 
00:31:49.953003 | orchestrator | 2026-01-13 00:31:49.953012 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-01-13 00:31:49.953020 | orchestrator | Tuesday 13 January 2026 00:31:25 +0000 (0:00:00.530) 0:07:22.096 ******* 2026-01-13 00:31:49.953028 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:49.953037 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:31:49.953045 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:31:49.953053 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:31:49.953061 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:31:49.953069 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:31:49.953077 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:31:49.953085 | orchestrator | 2026-01-13 00:31:49.953094 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-01-13 00:31:49.953102 | orchestrator | Tuesday 13 January 2026 00:31:26 +0000 (0:00:00.712) 0:07:22.808 ******* 2026-01-13 00:31:49.953111 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:49.953119 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:31:49.953128 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:31:49.953136 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:31:49.953144 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:31:49.953152 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:31:49.953159 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:31:49.953214 | orchestrator | 2026-01-13 00:31:49.953234 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-01-13 00:31:49.953242 | orchestrator | Tuesday 13 January 2026 00:31:31 +0000 (0:00:05.550) 0:07:28.359 ******* 2026-01-13 00:31:49.953249 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:31:49.953257 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:31:49.953264 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:31:49.953271 
| orchestrator | skipping: [testbed-node-5] 2026-01-13 00:31:49.953279 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:31:49.953286 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:31:49.953293 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:31:49.953300 | orchestrator | 2026-01-13 00:31:49.953322 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-01-13 00:31:49.953330 | orchestrator | Tuesday 13 January 2026 00:31:32 +0000 (0:00:00.608) 0:07:28.967 ******* 2026-01-13 00:31:49.953339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:31:49.953355 | orchestrator | 2026-01-13 00:31:49.953363 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-01-13 00:31:49.953370 | orchestrator | Tuesday 13 January 2026 00:31:33 +0000 (0:00:01.134) 0:07:30.102 ******* 2026-01-13 00:31:49.953377 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:49.953384 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:31:49.953391 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:31:49.953399 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:31:49.953406 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:31:49.953413 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:31:49.953420 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:31:49.953427 | orchestrator | 2026-01-13 00:31:49.953434 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-01-13 00:31:49.953441 | orchestrator | Tuesday 13 January 2026 00:31:35 +0000 (0:00:01.982) 0:07:32.084 ******* 2026-01-13 00:31:49.953448 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:49.953455 | orchestrator | ok: [testbed-node-3] 2026-01-13 
00:31:49.953463 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:31:49.953470 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:31:49.953477 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:31:49.953484 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:31:49.953491 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:31:49.953498 | orchestrator | 2026-01-13 00:31:49.953505 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-01-13 00:31:49.953512 | orchestrator | Tuesday 13 January 2026 00:31:36 +0000 (0:00:01.186) 0:07:33.271 ******* 2026-01-13 00:31:49.953519 | orchestrator | ok: [testbed-manager] 2026-01-13 00:31:49.953527 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:31:49.953534 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:31:49.953541 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:31:49.953548 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:31:49.953555 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:31:49.953562 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:31:49.953569 | orchestrator | 2026-01-13 00:31:49.953576 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-01-13 00:31:49.953584 | orchestrator | Tuesday 13 January 2026 00:31:37 +0000 (0:00:00.854) 0:07:34.126 ******* 2026-01-13 00:31:49.953591 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-13 00:31:49.953601 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-13 00:31:49.953608 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-13 00:31:49.953615 | orchestrator | changed: [testbed-node-0] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-13 00:31:49.953622 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-13 00:31:49.953630 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-13 00:31:49.953637 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-13 00:31:49.953644 | orchestrator | 2026-01-13 00:31:49.953651 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-01-13 00:31:49.953662 | orchestrator | Tuesday 13 January 2026 00:31:39 +0000 (0:00:01.937) 0:07:36.063 ******* 2026-01-13 00:31:49.953669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:31:49.953683 | orchestrator | 2026-01-13 00:31:49.953691 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-01-13 00:31:49.953698 | orchestrator | Tuesday 13 January 2026 00:31:40 +0000 (0:00:00.900) 0:07:36.963 ******* 2026-01-13 00:31:49.953705 | orchestrator | changed: [testbed-manager] 2026-01-13 00:31:49.953713 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:31:49.953720 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:31:49.953727 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:31:49.953734 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:31:49.953741 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:31:49.953748 | orchestrator | changed: 
[testbed-node-1] 2026-01-13 00:31:49.953755 | orchestrator | 2026-01-13 00:31:49.953767 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-01-13 00:32:20.835125 | orchestrator | Tuesday 13 January 2026 00:31:49 +0000 (0:00:09.644) 0:07:46.607 ******* 2026-01-13 00:32:20.835327 | orchestrator | ok: [testbed-manager] 2026-01-13 00:32:20.835345 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:32:20.835357 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:32:20.835367 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:32:20.835379 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:32:20.835389 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:32:20.835400 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:32:20.835411 | orchestrator | 2026-01-13 00:32:20.835423 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-01-13 00:32:20.835434 | orchestrator | Tuesday 13 January 2026 00:31:51 +0000 (0:00:01.880) 0:07:48.488 ******* 2026-01-13 00:32:20.835445 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:32:20.835456 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:32:20.835467 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:32:20.835478 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:32:20.835489 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:32:20.835499 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:32:20.835617 | orchestrator | 2026-01-13 00:32:20.835630 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-01-13 00:32:20.835641 | orchestrator | Tuesday 13 January 2026 00:31:53 +0000 (0:00:01.367) 0:07:49.855 ******* 2026-01-13 00:32:20.835652 | orchestrator | changed: [testbed-manager] 2026-01-13 00:32:20.835665 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:32:20.835676 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:32:20.835686 | orchestrator | changed: 
[testbed-node-4] 2026-01-13 00:32:20.835697 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:32:20.835708 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:32:20.835718 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:32:20.835729 | orchestrator | 2026-01-13 00:32:20.835740 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-01-13 00:32:20.835750 | orchestrator | 2026-01-13 00:32:20.835761 | orchestrator | TASK [Include hardening role] ************************************************** 2026-01-13 00:32:20.835772 | orchestrator | Tuesday 13 January 2026 00:31:54 +0000 (0:00:01.277) 0:07:51.133 ******* 2026-01-13 00:32:20.835783 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:32:20.835794 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:32:20.835804 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:32:20.835815 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:32:20.835826 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:32:20.835836 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:32:20.835847 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:32:20.835857 | orchestrator | 2026-01-13 00:32:20.835868 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-01-13 00:32:20.835879 | orchestrator | 2026-01-13 00:32:20.835890 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-01-13 00:32:20.835901 | orchestrator | Tuesday 13 January 2026 00:31:55 +0000 (0:00:00.668) 0:07:51.802 ******* 2026-01-13 00:32:20.835939 | orchestrator | changed: [testbed-manager] 2026-01-13 00:32:20.835950 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:32:20.835961 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:32:20.835971 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:32:20.835982 | orchestrator | changed: [testbed-node-0] 2026-01-13 
00:32:20.835993 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:32:20.836003 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:32:20.836014 | orchestrator | 2026-01-13 00:32:20.836025 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-01-13 00:32:20.836036 | orchestrator | Tuesday 13 January 2026 00:31:56 +0000 (0:00:01.433) 0:07:53.235 ******* 2026-01-13 00:32:20.836046 | orchestrator | ok: [testbed-manager] 2026-01-13 00:32:20.836057 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:32:20.836068 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:32:20.836078 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:32:20.836089 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:32:20.836099 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:32:20.836110 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:32:20.836120 | orchestrator | 2026-01-13 00:32:20.836131 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-01-13 00:32:20.836168 | orchestrator | Tuesday 13 January 2026 00:31:57 +0000 (0:00:01.414) 0:07:54.650 ******* 2026-01-13 00:32:20.836179 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:32:20.836190 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:32:20.836201 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:32:20.836211 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:32:20.836222 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:32:20.836232 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:32:20.836243 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:32:20.836253 | orchestrator | 2026-01-13 00:32:20.836264 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-01-13 00:32:20.836275 | orchestrator | Tuesday 13 January 2026 00:31:58 +0000 (0:00:00.475) 0:07:55.125 ******* 2026-01-13 00:32:20.836286 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:32:20.836298 | orchestrator | 2026-01-13 00:32:20.836324 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-01-13 00:32:20.836335 | orchestrator | Tuesday 13 January 2026 00:31:59 +0000 (0:00:00.918) 0:07:56.043 ******* 2026-01-13 00:32:20.836348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:32:20.836361 | orchestrator | 2026-01-13 00:32:20.836372 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-01-13 00:32:20.836383 | orchestrator | Tuesday 13 January 2026 00:32:00 +0000 (0:00:00.758) 0:07:56.802 ******* 2026-01-13 00:32:20.836394 | orchestrator | changed: [testbed-manager] 2026-01-13 00:32:20.836404 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:32:20.836415 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:32:20.836425 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:32:20.836436 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:32:20.836447 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:32:20.836457 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:32:20.836468 | orchestrator | 2026-01-13 00:32:20.836498 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-01-13 00:32:20.836510 | orchestrator | Tuesday 13 January 2026 00:32:09 +0000 (0:00:09.030) 0:08:05.832 ******* 2026-01-13 00:32:20.836521 | orchestrator | changed: [testbed-manager] 2026-01-13 00:32:20.836531 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:32:20.836542 | orchestrator | changed: [testbed-node-4] 2026-01-13 
00:32:20.836553 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:32:20.836572 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:32:20.836583 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:32:20.836593 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:32:20.836615 | orchestrator | 2026-01-13 00:32:20.836627 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-01-13 00:32:20.836638 | orchestrator | Tuesday 13 January 2026 00:32:10 +0000 (0:00:01.053) 0:08:06.886 ******* 2026-01-13 00:32:20.836649 | orchestrator | changed: [testbed-manager] 2026-01-13 00:32:20.836659 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:32:20.836670 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:32:20.836680 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:32:20.836691 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:32:20.836701 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:32:20.836712 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:32:20.836723 | orchestrator | 2026-01-13 00:32:20.836733 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-01-13 00:32:20.836744 | orchestrator | Tuesday 13 January 2026 00:32:11 +0000 (0:00:01.344) 0:08:08.231 ******* 2026-01-13 00:32:20.836755 | orchestrator | changed: [testbed-manager] 2026-01-13 00:32:20.836765 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:32:20.836776 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:32:20.836786 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:32:20.836797 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:32:20.836807 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:32:20.836818 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:32:20.836828 | orchestrator | 2026-01-13 00:32:20.836839 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-01-13 00:32:20.836850 | orchestrator | Tuesday 13 January 2026 00:32:13 +0000 (0:00:01.861) 0:08:10.093 ******* 2026-01-13 00:32:20.836861 | orchestrator | changed: [testbed-manager] 2026-01-13 00:32:20.836871 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:32:20.836882 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:32:20.836892 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:32:20.836903 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:32:20.836913 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:32:20.836924 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:32:20.836934 | orchestrator | 2026-01-13 00:32:20.836945 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-01-13 00:32:20.836956 | orchestrator | Tuesday 13 January 2026 00:32:14 +0000 (0:00:01.425) 0:08:11.518 ******* 2026-01-13 00:32:20.836966 | orchestrator | changed: [testbed-manager] 2026-01-13 00:32:20.836977 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:32:20.836988 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:32:20.836998 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:32:20.837009 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:32:20.837020 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:32:20.837030 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:32:20.837041 | orchestrator | 2026-01-13 00:32:20.837052 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-01-13 00:32:20.837062 | orchestrator | 2026-01-13 00:32:20.837073 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-01-13 00:32:20.837084 | orchestrator | Tuesday 13 January 2026 00:32:15 +0000 (0:00:01.103) 0:08:12.621 ******* 2026-01-13 00:32:20.837095 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-13 00:32:20.837106 | orchestrator | 2026-01-13 00:32:20.837116 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-01-13 00:32:20.837127 | orchestrator | Tuesday 13 January 2026 00:32:16 +0000 (0:00:00.782) 0:08:13.404 ******* 2026-01-13 00:32:20.837157 | orchestrator | ok: [testbed-manager] 2026-01-13 00:32:20.837169 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:32:20.837180 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:32:20.837190 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:32:20.837208 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:32:20.837219 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:32:20.837229 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:32:20.837240 | orchestrator | 2026-01-13 00:32:20.837251 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-01-13 00:32:20.837262 | orchestrator | Tuesday 13 January 2026 00:32:17 +0000 (0:00:01.089) 0:08:14.494 ******* 2026-01-13 00:32:20.837273 | orchestrator | changed: [testbed-manager] 2026-01-13 00:32:20.837283 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:32:20.837294 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:32:20.837305 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:32:20.837316 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:32:20.837326 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:32:20.837343 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:32:20.837354 | orchestrator | 2026-01-13 00:32:20.837365 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-01-13 00:32:20.837376 | orchestrator | Tuesday 13 January 2026 00:32:18 +0000 (0:00:01.122) 0:08:15.616 ******* 2026-01-13 00:32:20.837387 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-13 00:32:20.837398 | orchestrator | 2026-01-13 00:32:20.837409 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-01-13 00:32:20.837419 | orchestrator | Tuesday 13 January 2026 00:32:19 +0000 (0:00:00.795) 0:08:16.412 ******* 2026-01-13 00:32:20.837430 | orchestrator | ok: [testbed-manager] 2026-01-13 00:32:20.837441 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:32:20.837451 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:32:20.837462 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:32:20.837472 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:32:20.837483 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:32:20.837493 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:32:20.837504 | orchestrator | 2026-01-13 00:32:20.837522 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-01-13 00:32:22.397459 | orchestrator | Tuesday 13 January 2026 00:32:20 +0000 (0:00:01.081) 0:08:17.494 ******* 2026-01-13 00:32:22.397567 | orchestrator | changed: [testbed-manager] 2026-01-13 00:32:22.397585 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:32:22.397600 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:32:22.397614 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:32:22.397627 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:32:22.397641 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:32:22.397654 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:32:22.397668 | orchestrator | 2026-01-13 00:32:22.397682 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:32:22.397696 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-01-13 00:32:22.397710 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-01-13 00:32:22.397722 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-01-13 00:32:22.397734 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-01-13 00:32:22.397746 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-01-13 00:32:22.397758 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-01-13 00:32:22.397769 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-01-13 00:32:22.397807 | orchestrator | 2026-01-13 00:32:22.397820 | orchestrator | 2026-01-13 00:32:22.397832 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:32:22.397845 | orchestrator | Tuesday 13 January 2026 00:32:21 +0000 (0:00:01.072) 0:08:18.566 ******* 2026-01-13 00:32:22.397857 | orchestrator | =============================================================================== 2026-01-13 00:32:22.397869 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.35s 2026-01-13 00:32:22.397881 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.02s 2026-01-13 00:32:22.397893 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.30s 2026-01-13 00:32:22.397905 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.14s 2026-01-13 00:32:22.397917 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.55s 2026-01-13 00:32:22.397929 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.70s 2026-01-13 00:32:22.397941 | orchestrator | osism.services.docker : Install containerd package --------------------- 
10.73s 2026-01-13 00:32:22.397953 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.40s 2026-01-13 00:32:22.397965 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.15s 2026-01-13 00:32:22.397979 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.64s 2026-01-13 00:32:22.397991 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.03s 2026-01-13 00:32:22.398003 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.84s 2026-01-13 00:32:22.398081 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.55s 2026-01-13 00:32:22.398097 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.54s 2026-01-13 00:32:22.398111 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.42s 2026-01-13 00:32:22.398126 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.17s 2026-01-13 00:32:22.398159 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.95s 2026-01-13 00:32:22.398174 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.06s 2026-01-13 00:32:22.398202 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.62s 2026-01-13 00:32:22.398216 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.55s 2026-01-13 00:32:22.702525 | orchestrator | + osism apply fail2ban 2026-01-13 00:32:35.474950 | orchestrator | 2026-01-13 00:32:35 | INFO  | Task 9e47c4f7-4ec2-4007-886b-032cbee6e968 (fail2ban) was prepared for execution. 
2026-01-13 00:32:35.475055 | orchestrator | 2026-01-13 00:32:35 | INFO  | It takes a moment until task 9e47c4f7-4ec2-4007-886b-032cbee6e968 (fail2ban) has been started and output is visible here. 2026-01-13 00:32:57.283492 | orchestrator | 2026-01-13 00:32:57.283620 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-01-13 00:32:57.283646 | orchestrator | 2026-01-13 00:32:57.283666 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-01-13 00:32:57.283686 | orchestrator | Tuesday 13 January 2026 00:32:39 +0000 (0:00:00.275) 0:00:00.275 ******* 2026-01-13 00:32:57.283707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:32:57.283729 | orchestrator | 2026-01-13 00:32:57.283748 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-01-13 00:32:57.283767 | orchestrator | Tuesday 13 January 2026 00:32:40 +0000 (0:00:01.136) 0:00:01.411 ******* 2026-01-13 00:32:57.283786 | orchestrator | changed: [testbed-manager] 2026-01-13 00:32:57.283839 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:32:57.283857 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:32:57.283875 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:32:57.283893 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:32:57.283911 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:32:57.283929 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:32:57.283947 | orchestrator | 2026-01-13 00:32:57.283964 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-01-13 00:32:57.283982 | orchestrator | Tuesday 13 January 2026 00:32:52 +0000 (0:00:11.264) 0:00:12.676 ******* 
2026-01-13 00:32:57.284001 | orchestrator | changed: [testbed-manager] 2026-01-13 00:32:57.284019 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:32:57.284038 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:32:57.284056 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:32:57.284074 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:32:57.284092 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:32:57.284154 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:32:57.284175 | orchestrator | 2026-01-13 00:32:57.284193 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-01-13 00:32:57.284212 | orchestrator | Tuesday 13 January 2026 00:32:53 +0000 (0:00:01.506) 0:00:14.183 ******* 2026-01-13 00:32:57.284231 | orchestrator | ok: [testbed-manager] 2026-01-13 00:32:57.284250 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:32:57.284267 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:32:57.284285 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:32:57.284302 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:32:57.284318 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:32:57.284335 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:32:57.284352 | orchestrator | 2026-01-13 00:32:57.284370 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-01-13 00:32:57.284388 | orchestrator | Tuesday 13 January 2026 00:32:55 +0000 (0:00:01.581) 0:00:15.764 ******* 2026-01-13 00:32:57.284407 | orchestrator | changed: [testbed-manager] 2026-01-13 00:32:57.284425 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:32:57.284442 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:32:57.284460 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:32:57.284478 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:32:57.284497 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:32:57.284514 | orchestrator | changed: 
[testbed-node-5] 2026-01-13 00:32:57.284534 | orchestrator | 2026-01-13 00:32:57.284552 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:32:57.284570 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:32:57.284588 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:32:57.284605 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:32:57.284622 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:32:57.284663 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:32:57.284684 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:32:57.284718 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:32:57.284737 | orchestrator | 2026-01-13 00:32:57.284756 | orchestrator | 2026-01-13 00:32:57.284776 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:32:57.284816 | orchestrator | Tuesday 13 January 2026 00:32:56 +0000 (0:00:01.654) 0:00:17.419 ******* 2026-01-13 00:32:57.284836 | orchestrator | =============================================================================== 2026-01-13 00:32:57.284856 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.26s 2026-01-13 00:32:57.284893 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.65s 2026-01-13 00:32:57.284905 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.58s 2026-01-13 00:32:57.284916 | orchestrator | osism.services.fail2ban : 
Copy configuration files ---------------------- 1.51s 2026-01-13 00:32:57.284927 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.14s 2026-01-13 00:32:57.565636 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-01-13 00:32:57.565735 | orchestrator | + osism apply network 2026-01-13 00:33:09.743489 | orchestrator | 2026-01-13 00:33:09 | INFO  | Task 08e96ea9-e9e0-489d-99af-6eaeeff5e332 (network) was prepared for execution. 2026-01-13 00:33:09.743598 | orchestrator | 2026-01-13 00:33:09 | INFO  | It takes a moment until task 08e96ea9-e9e0-489d-99af-6eaeeff5e332 (network) has been started and output is visible here. 2026-01-13 00:33:37.221689 | orchestrator | 2026-01-13 00:33:37.221801 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-01-13 00:33:37.221819 | orchestrator | 2026-01-13 00:33:37.221832 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-01-13 00:33:37.221844 | orchestrator | Tuesday 13 January 2026 00:33:13 +0000 (0:00:00.255) 0:00:00.255 ******* 2026-01-13 00:33:37.221856 | orchestrator | ok: [testbed-manager] 2026-01-13 00:33:37.221869 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:33:37.221880 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:33:37.221891 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:33:37.221903 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:33:37.221914 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:33:37.221925 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:33:37.221936 | orchestrator | 2026-01-13 00:33:37.221948 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-01-13 00:33:37.221959 | orchestrator | Tuesday 13 January 2026 00:33:14 +0000 (0:00:00.568) 0:00:00.823 ******* 2026-01-13 00:33:37.221972 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:33:37.221986 | orchestrator | 2026-01-13 00:33:37.221997 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-01-13 00:33:37.222009 | orchestrator | Tuesday 13 January 2026 00:33:15 +0000 (0:00:01.005) 0:00:01.828 ******* 2026-01-13 00:33:37.222130 | orchestrator | ok: [testbed-manager] 2026-01-13 00:33:37.222143 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:33:37.222154 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:33:37.222164 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:33:37.222175 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:33:37.222186 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:33:37.222203 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:33:37.222222 | orchestrator | 2026-01-13 00:33:37.222239 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-01-13 00:33:37.222257 | orchestrator | Tuesday 13 January 2026 00:33:17 +0000 (0:00:02.040) 0:00:03.869 ******* 2026-01-13 00:33:37.222276 | orchestrator | ok: [testbed-manager] 2026-01-13 00:33:37.222294 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:33:37.222312 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:33:37.222331 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:33:37.222350 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:33:37.222365 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:33:37.222376 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:33:37.222387 | orchestrator | 2026-01-13 00:33:37.222398 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-01-13 00:33:37.222440 | orchestrator | Tuesday 13 January 2026 00:33:19 +0000 (0:00:01.735) 0:00:05.605 ******* 
2026-01-13 00:33:37.222451 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-01-13 00:33:37.222463 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-01-13 00:33:37.222473 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-01-13 00:33:37.222484 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-01-13 00:33:37.222495 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-01-13 00:33:37.222506 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-01-13 00:33:37.222516 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-01-13 00:33:37.222527 | orchestrator | 2026-01-13 00:33:37.222538 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-01-13 00:33:37.222549 | orchestrator | Tuesday 13 January 2026 00:33:20 +0000 (0:00:00.957) 0:00:06.562 ******* 2026-01-13 00:33:37.222560 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-13 00:33:37.222571 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-13 00:33:37.222582 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-13 00:33:37.222592 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-13 00:33:37.222603 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 00:33:37.222613 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-13 00:33:37.222624 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-13 00:33:37.222634 | orchestrator | 2026-01-13 00:33:37.222645 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-01-13 00:33:37.222656 | orchestrator | Tuesday 13 January 2026 00:33:23 +0000 (0:00:03.244) 0:00:09.806 ******* 2026-01-13 00:33:37.222667 | orchestrator | changed: [testbed-manager] 2026-01-13 00:33:37.222677 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:33:37.222688 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:33:37.222698 | orchestrator | changed: 
[testbed-node-2] 2026-01-13 00:33:37.222709 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:33:37.222720 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:33:37.222730 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:33:37.222741 | orchestrator | 2026-01-13 00:33:37.222752 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-13 00:33:37.222762 | orchestrator | Tuesday 13 January 2026 00:33:24 +0000 (0:00:01.552) 0:00:11.359 ******* 2026-01-13 00:33:37.222773 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-13 00:33:37.222784 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-13 00:33:37.222794 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 00:33:37.222805 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-13 00:33:37.222816 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-13 00:33:37.222826 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-13 00:33:37.222837 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-13 00:33:37.222848 | orchestrator | 2026-01-13 00:33:37.222858 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-13 00:33:37.222869 | orchestrator | Tuesday 13 January 2026 00:33:26 +0000 (0:00:01.614) 0:00:12.974 ******* 2026-01-13 00:33:37.222880 | orchestrator | ok: [testbed-manager] 2026-01-13 00:33:37.222890 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:33:37.222923 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:33:37.222943 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:33:37.222960 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:33:37.222978 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:33:37.222995 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:33:37.223013 | orchestrator | 2026-01-13 00:33:37.223033 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-13 00:33:37.223136 | 
orchestrator | Tuesday 13 January 2026 00:33:27 +0000 (0:00:01.114) 0:00:14.088 ******* 2026-01-13 00:33:37.223161 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:33:37.223181 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:33:37.223201 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:33:37.223235 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:33:37.223255 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:33:37.223273 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:33:37.223292 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:33:37.223309 | orchestrator | 2026-01-13 00:33:37.223326 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-13 00:33:37.223341 | orchestrator | Tuesday 13 January 2026 00:33:28 +0000 (0:00:00.648) 0:00:14.737 ******* 2026-01-13 00:33:37.223358 | orchestrator | ok: [testbed-manager] 2026-01-13 00:33:37.223374 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:33:37.223393 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:33:37.223424 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:33:37.223456 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:33:37.223489 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:33:37.223521 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:33:37.223554 | orchestrator | 2026-01-13 00:33:37.223588 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-13 00:33:37.223622 | orchestrator | Tuesday 13 January 2026 00:33:30 +0000 (0:00:02.203) 0:00:16.940 ******* 2026-01-13 00:33:37.223655 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:33:37.223690 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:33:37.223722 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:33:37.223756 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:33:37.223791 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:33:37.223823 | 
orchestrator | skipping: [testbed-node-5] 2026-01-13 00:33:37.223860 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-13 00:33:37.223896 | orchestrator | 2026-01-13 00:33:37.223931 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-13 00:33:37.223967 | orchestrator | Tuesday 13 January 2026 00:33:31 +0000 (0:00:00.902) 0:00:17.842 ******* 2026-01-13 00:33:37.224002 | orchestrator | ok: [testbed-manager] 2026-01-13 00:33:37.224036 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:33:37.224102 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:33:37.224143 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:33:37.224177 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:33:37.224211 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:33:37.224245 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:33:37.224278 | orchestrator | 2026-01-13 00:33:37.224312 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-13 00:33:37.224348 | orchestrator | Tuesday 13 January 2026 00:33:33 +0000 (0:00:01.627) 0:00:19.470 ******* 2026-01-13 00:33:37.224384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:33:37.224420 | orchestrator | 2026-01-13 00:33:37.224456 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-13 00:33:37.224490 | orchestrator | Tuesday 13 January 2026 00:33:34 +0000 (0:00:01.193) 0:00:20.664 ******* 2026-01-13 00:33:37.224526 | orchestrator | ok: [testbed-manager] 2026-01-13 00:33:37.224550 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:33:37.224567 | orchestrator 
| ok: [testbed-node-1] 2026-01-13 00:33:37.224584 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:33:37.224601 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:33:37.224620 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:33:37.224636 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:33:37.224652 | orchestrator | 2026-01-13 00:33:37.224671 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-13 00:33:37.224689 | orchestrator | Tuesday 13 January 2026 00:33:35 +0000 (0:00:01.104) 0:00:21.769 ******* 2026-01-13 00:33:37.224708 | orchestrator | ok: [testbed-manager] 2026-01-13 00:33:37.224727 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:33:37.224745 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:33:37.224778 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:33:37.224789 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:33:37.224800 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:33:37.224810 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:33:37.224821 | orchestrator | 2026-01-13 00:33:37.224831 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-13 00:33:37.224842 | orchestrator | Tuesday 13 January 2026 00:33:36 +0000 (0:00:00.626) 0:00:22.395 ******* 2026-01-13 00:33:37.224853 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-13 00:33:37.224864 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-13 00:33:37.224875 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-13 00:33:37.224885 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-13 00:33:37.224896 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-13 00:33:37.224918 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-13 00:33:37.224929 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-13 00:33:37.224940 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-13 00:33:37.224950 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-13 00:33:37.224961 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-13 00:33:37.224971 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-13 00:33:37.224982 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-13 00:33:37.224993 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-13 00:33:37.225004 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-13 00:33:37.225014 | orchestrator | 2026-01-13 00:33:37.225045 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-13 00:33:52.490182 | orchestrator | Tuesday 13 January 2026 00:33:37 +0000 (0:00:01.192) 0:00:23.588 ******* 2026-01-13 00:33:52.490296 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:33:52.490314 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:33:52.490326 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:33:52.490337 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:33:52.490348 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:33:52.490359 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:33:52.490369 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:33:52.490381 | orchestrator | 2026-01-13 00:33:52.490394 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-13 00:33:52.490405 | orchestrator | Tuesday 13 January 2026 00:33:37 +0000 (0:00:00.586) 0:00:24.175 ******* 2026-01-13 00:33:52.490418 | orchestrator | included: 
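The cleanup step above compares the netplan files found on disk against the set the role just wrote, removing leftovers such as `50-cloud-init.yaml` while keeping the managed `01-osism.yaml`. A minimal sketch of that set-difference logic (the function name and inputs are illustrative, taken from the log; this is not the role's actual code):

```python
# Sketch of the "Remove unused configuration files" behaviour seen above:
# anything present on disk that is not in the freshly written set is removed.

def unused_files(existing, configured):
    """Return files that exist but are no longer managed, in stable order."""
    keep = set(configured)
    return [path for path in sorted(existing) if path not in keep]

existing = [
    "/etc/netplan/01-osism.yaml",       # just written by the role -> kept
    "/etc/netplan/50-cloud-init.yaml",  # cloud-init leftover -> removed
]
configured = ["/etc/netplan/01-osism.yaml"]

print(unused_files(existing, configured))
```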
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-1, testbed-node-5, testbed-node-4 2026-01-13 00:33:52.490432 | orchestrator | 2026-01-13 00:33:52.490443 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-13 00:33:52.490454 | orchestrator | Tuesday 13 January 2026 00:33:42 +0000 (0:00:04.307) 0:00:28.482 ******* 2026-01-13 00:33:52.490466 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-13 00:33:52.490480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-13 00:33:52.490491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-13 00:33:52.490541 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-13 00:33:52.490559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 
42}}) 2026-01-13 00:33:52.490577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-13 00:33:52.490597 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-13 00:33:52.490617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-13 00:33:52.490636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-13 00:33:52.490680 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-13 00:33:52.490703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-13 00:33:52.490750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-13 00:33:52.490765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-13 00:33:52.490778 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-13 00:33:52.490791 | orchestrator | 2026-01-13 00:33:52.490805 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-13 00:33:52.490817 | orchestrator | Tuesday 13 January 2026 00:33:47 +0000 (0:00:05.341) 0:00:33.824 ******* 2026-01-13 00:33:52.490830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-13 00:33:52.490860 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-13 00:33:52.490879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-13 00:33:52.490901 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-13 00:33:52.490922 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-13 00:33:52.490941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-13 00:33:52.490953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-13 00:33:52.490964 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-13 00:33:52.490975 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-13 00:33:52.490991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 
'mtu': 1350, 'vni': 23}}) 2026-01-13 00:33:52.491003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-13 00:33:52.491014 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-13 00:33:52.491033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-13 00:34:04.794299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-13 00:34:04.794392 | orchestrator | 2026-01-13 00:34:04.794404 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-13 00:34:04.794413 | orchestrator | Tuesday 13 January 2026 00:33:52 +0000 (0:00:05.033) 0:00:38.858 ******* 2026-01-13 00:34:04.794443 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:34:04.794451 | orchestrator | 2026-01-13 00:34:04.794458 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
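The netdev/network items above form a full mesh: each VXLAN endpoint lists every other endpoint as a flood destination, with the host's own address excluded from its `dests`. A small sketch of how those per-host peer lists come about (variable and function names are illustrative; the role's real templating differs):

```python
# Sketch of the full-mesh VXLAN peer lists visible in the task output above:
# every endpoint floods to all endpoints except itself.

endpoints = {
    "testbed-manager": "192.168.16.5",
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
    "testbed-node-3": "192.168.16.13",
    "testbed-node-4": "192.168.16.14",
    "testbed-node-5": "192.168.16.15",
}

def flood_dests(host):
    """All other endpoints, sorted as strings (matching the order in the log)."""
    local = endpoints[host]
    return sorted(ip for ip in endpoints.values() if ip != local)

print(flood_dests("testbed-node-0"))
```

Note the string sort, which is why `192.168.16.5` appears after `192.168.16.15` in the logged lists.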
2026-01-13 00:34:04.794465 | orchestrator | Tuesday 13 January 2026 00:33:53 +0000 (0:00:01.071) 0:00:39.930 ******* 2026-01-13 00:34:04.794472 | orchestrator | ok: [testbed-manager] 2026-01-13 00:34:04.794480 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:34:04.794486 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:34:04.794493 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:34:04.794500 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:34:04.794507 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:34:04.794513 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:34:04.794520 | orchestrator | 2026-01-13 00:34:04.794527 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-13 00:34:04.794534 | orchestrator | Tuesday 13 January 2026 00:33:54 +0000 (0:00:01.010) 0:00:40.941 ******* 2026-01-13 00:34:04.794541 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-13 00:34:04.794548 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-13 00:34:04.794555 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-13 00:34:04.794562 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-13 00:34:04.794569 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:34:04.794576 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-13 00:34:04.794583 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-13 00:34:04.794590 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-13 00:34:04.794596 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-13 00:34:04.794603 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:34:04.794610 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-13 00:34:04.794617 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-13 00:34:04.794623 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-13 00:34:04.794630 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-13 00:34:04.794637 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:34:04.794644 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-13 00:34:04.794650 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-13 00:34:04.794657 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-13 00:34:04.794664 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-13 00:34:04.794671 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:34:04.794677 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-13 00:34:04.794684 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-13 00:34:04.794691 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-13 00:34:04.794697 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-13 00:34:04.794704 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:34:04.794723 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-13 00:34:04.794730 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-13 00:34:04.794743 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  
2026-01-13 00:34:04.794750 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-13 00:34:04.794757 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:34:04.794764 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-13 00:34:04.794770 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-13 00:34:04.794777 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-13 00:34:04.794784 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-13 00:34:04.794791 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:34:04.794797 | orchestrator | 2026-01-13 00:34:04.794804 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-01-13 00:34:04.794822 | orchestrator | Tuesday 13 January 2026 00:33:55 +0000 (0:00:00.783) 0:00:41.724 ******* 2026-01-13 00:34:04.794829 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:34:04.794836 | orchestrator | 2026-01-13 00:34:04.794843 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-01-13 00:34:04.794851 | orchestrator | Tuesday 13 January 2026 00:33:56 +0000 (0:00:01.204) 0:00:42.928 ******* 2026-01-13 00:34:04.794858 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:34:04.794867 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:34:04.794874 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:34:04.794882 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:34:04.794890 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:34:04.794897 | orchestrator | 
skipping: [testbed-node-4] 2026-01-13 00:34:04.794905 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:34:04.794913 | orchestrator | 2026-01-13 00:34:04.794920 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-01-13 00:34:04.794928 | orchestrator | Tuesday 13 January 2026 00:33:57 +0000 (0:00:00.632) 0:00:43.561 ******* 2026-01-13 00:34:04.794936 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:34:04.794944 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:34:04.794952 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:34:04.794959 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:34:04.794967 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:34:04.794975 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:34:04.794983 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:34:04.794991 | orchestrator | 2026-01-13 00:34:04.794999 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-01-13 00:34:04.795006 | orchestrator | Tuesday 13 January 2026 00:33:57 +0000 (0:00:00.740) 0:00:44.302 ******* 2026-01-13 00:34:04.795014 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:34:04.795022 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:34:04.795030 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:34:04.795062 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:34:04.795070 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:34:04.795078 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:34:04.795085 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:34:04.795093 | orchestrator | 2026-01-13 00:34:04.795100 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-01-13 00:34:04.795108 | orchestrator | Tuesday 13 January 2026 00:33:58 +0000 (0:00:00.585) 0:00:44.887 ******* 2026-01-13 00:34:04.795116 | orchestrator | ok: 
[testbed-manager] 2026-01-13 00:34:04.795123 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:34:04.795131 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:34:04.795139 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:34:04.795146 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:34:04.795160 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:34:04.795168 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:34:04.795175 | orchestrator | 2026-01-13 00:34:04.795183 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-01-13 00:34:04.795191 | orchestrator | Tuesday 13 January 2026 00:34:00 +0000 (0:00:01.753) 0:00:46.641 ******* 2026-01-13 00:34:04.795199 | orchestrator | ok: [testbed-manager] 2026-01-13 00:34:04.795207 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:34:04.795214 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:34:04.795220 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:34:04.795227 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:34:04.795233 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:34:04.795240 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:34:04.795247 | orchestrator | 2026-01-13 00:34:04.795253 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-01-13 00:34:04.795260 | orchestrator | Tuesday 13 January 2026 00:34:01 +0000 (0:00:00.964) 0:00:47.606 ******* 2026-01-13 00:34:04.795267 | orchestrator | ok: [testbed-manager] 2026-01-13 00:34:04.795273 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:34:04.795280 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:34:04.795286 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:34:04.795293 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:34:04.795299 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:34:04.795306 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:34:04.795312 | orchestrator | 2026-01-13 00:34:04.795319 | orchestrator | RUNNING HANDLER 
[osism.commons.network : Reload systemd-networkd] ************** 2026-01-13 00:34:04.795326 | orchestrator | Tuesday 13 January 2026 00:34:03 +0000 (0:00:02.221) 0:00:49.827 ******* 2026-01-13 00:34:04.795332 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:34:04.795339 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:34:04.795346 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:34:04.795352 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:34:04.795359 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:34:04.795365 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:34:04.795372 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:34:04.795378 | orchestrator | 2026-01-13 00:34:04.795388 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-01-13 00:34:04.795395 | orchestrator | Tuesday 13 January 2026 00:34:04 +0000 (0:00:00.808) 0:00:50.635 ******* 2026-01-13 00:34:04.795402 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:34:04.795409 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:34:04.795415 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:34:04.795422 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:34:04.795428 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:34:04.795435 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:34:04.795441 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:34:04.795448 | orchestrator | 2026-01-13 00:34:04.795455 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:34:04.795462 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-13 00:34:04.795471 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-13 00:34:04.795482 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 
failed=0 skipped=9  rescued=0 ignored=0 2026-01-13 00:34:05.133930 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-13 00:34:05.134103 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-13 00:34:05.134119 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-13 00:34:05.134894 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-13 00:34:05.134911 | orchestrator | 2026-01-13 00:34:05.134924 | orchestrator | 2026-01-13 00:34:05.134936 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:34:05.134949 | orchestrator | Tuesday 13 January 2026 00:34:04 +0000 (0:00:00.532) 0:00:51.168 ******* 2026-01-13 00:34:05.134960 | orchestrator | =============================================================================== 2026-01-13 00:34:05.134971 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.34s 2026-01-13 00:34:05.134982 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.03s 2026-01-13 00:34:05.134993 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.31s 2026-01-13 00:34:05.135004 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.24s 2026-01-13 00:34:05.135015 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.22s 2026-01-13 00:34:05.135025 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.20s 2026-01-13 00:34:05.135036 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.04s 2026-01-13 00:34:05.135062 | orchestrator | osism.commons.network : Disable and stop network-extra-init 
service ----- 1.75s 2026-01-13 00:34:05.135073 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.74s 2026-01-13 00:34:05.135083 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.63s 2026-01-13 00:34:05.135094 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.61s 2026-01-13 00:34:05.135105 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.55s 2026-01-13 00:34:05.135116 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.20s 2026-01-13 00:34:05.135127 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.19s 2026-01-13 00:34:05.135137 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.19s 2026-01-13 00:34:05.135148 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.11s 2026-01-13 00:34:05.135159 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.10s 2026-01-13 00:34:05.135170 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.07s 2026-01-13 00:34:05.135180 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.01s 2026-01-13 00:34:05.135191 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.01s 2026-01-13 00:34:05.403189 | orchestrator | + osism apply wireguard 2026-01-13 00:34:17.438133 | orchestrator | 2026-01-13 00:34:17 | INFO  | Task 8a352e9e-84ed-40db-bcd1-dca8ac701d87 (wireguard) was prepared for execution. 2026-01-13 00:34:17.438242 | orchestrator | 2026-01-13 00:34:17 | INFO  | It takes a moment until task 8a352e9e-84ed-40db-bcd1-dca8ac701d87 (wireguard) has been started and output is visible here. 
2026-01-13 00:34:36.841498 | orchestrator | 2026-01-13 00:34:36.841641 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-01-13 00:34:36.841670 | orchestrator | 2026-01-13 00:34:36.841692 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-01-13 00:34:36.841712 | orchestrator | Tuesday 13 January 2026 00:34:21 +0000 (0:00:00.163) 0:00:00.163 ******* 2026-01-13 00:34:36.841732 | orchestrator | ok: [testbed-manager] 2026-01-13 00:34:36.841753 | orchestrator | 2026-01-13 00:34:36.841774 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-01-13 00:34:36.841800 | orchestrator | Tuesday 13 January 2026 00:34:22 +0000 (0:00:01.303) 0:00:01.467 ******* 2026-01-13 00:34:36.841820 | orchestrator | changed: [testbed-manager] 2026-01-13 00:34:36.841840 | orchestrator | 2026-01-13 00:34:36.841897 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-01-13 00:34:36.841910 | orchestrator | Tuesday 13 January 2026 00:34:29 +0000 (0:00:06.612) 0:00:08.080 ******* 2026-01-13 00:34:36.841921 | orchestrator | changed: [testbed-manager] 2026-01-13 00:34:36.841932 | orchestrator | 2026-01-13 00:34:36.841943 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-01-13 00:34:36.841953 | orchestrator | Tuesday 13 January 2026 00:34:29 +0000 (0:00:00.561) 0:00:08.641 ******* 2026-01-13 00:34:36.841964 | orchestrator | changed: [testbed-manager] 2026-01-13 00:34:36.841974 | orchestrator | 2026-01-13 00:34:36.841985 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-01-13 00:34:36.841996 | orchestrator | Tuesday 13 January 2026 00:34:30 +0000 (0:00:00.428) 0:00:09.070 ******* 2026-01-13 00:34:36.842139 | orchestrator | ok: [testbed-manager] 2026-01-13 00:34:36.842163 | orchestrator | 2026-01-13 
00:34:36.842186 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-01-13 00:34:36.842206 | orchestrator | Tuesday 13 January 2026 00:34:31 +0000 (0:00:00.684) 0:00:09.754 ******* 2026-01-13 00:34:36.842224 | orchestrator | ok: [testbed-manager] 2026-01-13 00:34:36.842235 | orchestrator | 2026-01-13 00:34:36.842246 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-01-13 00:34:36.842256 | orchestrator | Tuesday 13 January 2026 00:34:31 +0000 (0:00:00.397) 0:00:10.151 ******* 2026-01-13 00:34:36.842267 | orchestrator | ok: [testbed-manager] 2026-01-13 00:34:36.842278 | orchestrator | 2026-01-13 00:34:36.842288 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-01-13 00:34:36.842299 | orchestrator | Tuesday 13 January 2026 00:34:31 +0000 (0:00:00.405) 0:00:10.557 ******* 2026-01-13 00:34:36.842310 | orchestrator | changed: [testbed-manager] 2026-01-13 00:34:36.842320 | orchestrator | 2026-01-13 00:34:36.842331 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-01-13 00:34:36.842342 | orchestrator | Tuesday 13 January 2026 00:34:33 +0000 (0:00:01.162) 0:00:11.719 ******* 2026-01-13 00:34:36.842352 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-13 00:34:36.842363 | orchestrator | changed: [testbed-manager] 2026-01-13 00:34:36.842373 | orchestrator | 2026-01-13 00:34:36.842384 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-01-13 00:34:36.842395 | orchestrator | Tuesday 13 January 2026 00:34:33 +0000 (0:00:00.954) 0:00:12.674 ******* 2026-01-13 00:34:36.842406 | orchestrator | changed: [testbed-manager] 2026-01-13 00:34:36.842418 | orchestrator | 2026-01-13 00:34:36.842437 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-01-13 
00:34:36.842456 | orchestrator | Tuesday 13 January 2026 00:34:35 +0000 (0:00:01.621) 0:00:14.296 ******* 2026-01-13 00:34:36.842473 | orchestrator | changed: [testbed-manager] 2026-01-13 00:34:36.842492 | orchestrator | 2026-01-13 00:34:36.842512 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:34:36.842530 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:34:36.842547 | orchestrator | 2026-01-13 00:34:36.842559 | orchestrator | 2026-01-13 00:34:36.842570 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:34:36.842580 | orchestrator | Tuesday 13 January 2026 00:34:36 +0000 (0:00:00.899) 0:00:15.195 ******* 2026-01-13 00:34:36.842591 | orchestrator | =============================================================================== 2026-01-13 00:34:36.842620 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.61s 2026-01-13 00:34:36.842631 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.62s 2026-01-13 00:34:36.842642 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.30s 2026-01-13 00:34:36.842653 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.16s 2026-01-13 00:34:36.842664 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.95s 2026-01-13 00:34:36.842686 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s 2026-01-13 00:34:36.842697 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.68s 2026-01-13 00:34:36.842708 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2026-01-13 00:34:36.842718 | orchestrator | osism.services.wireguard : 
Create preshared key ------------------------- 0.43s 2026-01-13 00:34:36.842729 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2026-01-13 00:34:36.842740 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s 2026-01-13 00:34:37.098995 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-01-13 00:34:37.225243 | orchestrator | + osism apply --environment custom workarounds 2026-01-13 00:34:39.129450 | orchestrator | 2026-01-13 00:34:39 | INFO  | Trying to run play workarounds in environment custom 2026-01-13 00:34:49.296676 | orchestrator | 2026-01-13 00:34:49 | INFO  | Task 8323183f-4a7a-455e-bd47-9ad302c4b40a (workarounds) was prepared for execution. 2026-01-13 00:34:49.296838 | orchestrator | 2026-01-13 00:34:49 | INFO  | It takes a moment until task 8323183f-4a7a-455e-bd47-9ad302c4b40a (workarounds) has been started and output is visible here. 
2026-01-13 00:35:14.002531 | orchestrator | 2026-01-13 00:35:14.002667 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 00:35:14.002689 | orchestrator | 2026-01-13 00:35:14.002702 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-01-13 00:35:14.002713 | orchestrator | Tuesday 13 January 2026 00:34:53 +0000 (0:00:00.125) 0:00:00.125 ******* 2026-01-13 00:35:14.002725 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-01-13 00:35:14.002736 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-01-13 00:35:14.002747 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-01-13 00:35:14.002759 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-01-13 00:35:14.002770 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-01-13 00:35:14.002780 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-01-13 00:35:14.002791 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-01-13 00:35:14.002802 | orchestrator | 2026-01-13 00:35:14.002813 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-01-13 00:35:14.002824 | orchestrator | 2026-01-13 00:35:14.002835 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-13 00:35:14.002846 | orchestrator | Tuesday 13 January 2026 00:34:54 +0000 (0:00:00.774) 0:00:00.899 ******* 2026-01-13 00:35:14.002857 | orchestrator | ok: [testbed-manager] 2026-01-13 00:35:14.002869 | orchestrator | 2026-01-13 00:35:14.002880 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-01-13 00:35:14.002891 | orchestrator | 2026-01-13 00:35:14.002902 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-01-13 00:35:14.002913 | orchestrator | Tuesday 13 January 2026 00:34:56 +0000 (0:00:02.139) 0:00:03.039 ******* 2026-01-13 00:35:14.002924 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:35:14.002936 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:35:14.002947 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:35:14.003037 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:35:14.003049 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:35:14.003060 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:35:14.003091 | orchestrator | 2026-01-13 00:35:14.003103 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-01-13 00:35:14.003113 | orchestrator | 2026-01-13 00:35:14.003124 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-01-13 00:35:14.003135 | orchestrator | Tuesday 13 January 2026 00:34:58 +0000 (0:00:01.996) 0:00:05.036 ******* 2026-01-13 00:35:14.003147 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-13 00:35:14.003159 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-13 00:35:14.003169 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-13 00:35:14.003180 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-13 00:35:14.003191 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-13 00:35:14.003202 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-13 00:35:14.003212 | orchestrator | 2026-01-13 00:35:14.003223 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-01-13 00:35:14.003234 | orchestrator | Tuesday 13 January 2026 00:34:59 +0000 (0:00:01.513) 0:00:06.549 ******* 2026-01-13 00:35:14.003245 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:35:14.003256 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:35:14.003267 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:35:14.003278 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:35:14.003288 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:35:14.003299 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:35:14.003309 | orchestrator | 2026-01-13 00:35:14.003320 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-01-13 00:35:14.003332 | orchestrator | Tuesday 13 January 2026 00:35:03 +0000 (0:00:04.052) 0:00:10.602 ******* 2026-01-13 00:35:14.003342 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:35:14.003353 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:35:14.003364 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:35:14.003375 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:35:14.003386 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:35:14.003396 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:35:14.003407 | orchestrator | 2026-01-13 00:35:14.003418 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-01-13 00:35:14.003429 | orchestrator | 2026-01-13 00:35:14.003439 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-01-13 00:35:14.003450 | orchestrator | Tuesday 13 January 2026 00:35:04 +0000 (0:00:00.632) 0:00:11.234 ******* 2026-01-13 00:35:14.003461 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:35:14.003472 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:35:14.003483 | orchestrator | changed: [testbed-node-5] 2026-01-13 
00:35:14.003493 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:35:14.003504 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:35:14.003515 | orchestrator | changed: [testbed-manager] 2026-01-13 00:35:14.003525 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:35:14.003536 | orchestrator | 2026-01-13 00:35:14.003547 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-01-13 00:35:14.003558 | orchestrator | Tuesday 13 January 2026 00:35:05 +0000 (0:00:01.459) 0:00:12.694 ******* 2026-01-13 00:35:14.003569 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:35:14.003579 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:35:14.003590 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:35:14.003609 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:35:14.003620 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:35:14.003631 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:35:14.003659 | orchestrator | changed: [testbed-manager] 2026-01-13 00:35:14.003678 | orchestrator | 2026-01-13 00:35:14.003689 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-01-13 00:35:14.003700 | orchestrator | Tuesday 13 January 2026 00:35:07 +0000 (0:00:01.475) 0:00:14.169 ******* 2026-01-13 00:35:14.003710 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:35:14.003721 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:35:14.003732 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:35:14.003743 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:35:14.003754 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:35:14.003764 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:35:14.003775 | orchestrator | ok: [testbed-manager] 2026-01-13 00:35:14.003786 | orchestrator | 2026-01-13 00:35:14.003797 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-01-13 00:35:14.003808 | orchestrator 
| Tuesday 13 January 2026 00:35:08 +0000 (0:00:01.490) 0:00:15.659 ******* 2026-01-13 00:35:14.003818 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:35:14.003829 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:35:14.003840 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:35:14.003851 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:35:14.003861 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:35:14.003872 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:35:14.003883 | orchestrator | changed: [testbed-manager] 2026-01-13 00:35:14.003894 | orchestrator | 2026-01-13 00:35:14.003904 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-01-13 00:35:14.003915 | orchestrator | Tuesday 13 January 2026 00:35:10 +0000 (0:00:01.708) 0:00:17.368 ******* 2026-01-13 00:35:14.003926 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:35:14.003937 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:35:14.003948 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:35:14.003977 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:35:14.003988 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:35:14.003998 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:35:14.004009 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:35:14.004019 | orchestrator | 2026-01-13 00:35:14.004030 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-01-13 00:35:14.004041 | orchestrator | 2026-01-13 00:35:14.004051 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-01-13 00:35:14.004062 | orchestrator | Tuesday 13 January 2026 00:35:11 +0000 (0:00:00.613) 0:00:17.982 ******* 2026-01-13 00:35:14.004073 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:35:14.004084 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:35:14.004094 | orchestrator | ok: [testbed-node-4] 
2026-01-13 00:35:14.004105 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:35:14.004116 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:35:14.004127 | orchestrator | ok: [testbed-manager] 2026-01-13 00:35:14.004137 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:35:14.004148 | orchestrator | 2026-01-13 00:35:14.004159 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:35:14.004170 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:35:14.004182 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:35:14.004193 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:35:14.004204 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:35:14.004215 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:35:14.004226 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:35:14.004244 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:35:14.004255 | orchestrator | 2026-01-13 00:35:14.004265 | orchestrator | 2026-01-13 00:35:14.004277 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:35:14.004287 | orchestrator | Tuesday 13 January 2026 00:35:13 +0000 (0:00:02.735) 0:00:20.717 ******* 2026-01-13 00:35:14.004298 | orchestrator | =============================================================================== 2026-01-13 00:35:14.004309 | orchestrator | Run update-ca-certificates ---------------------------------------------- 4.05s 2026-01-13 00:35:14.004319 | orchestrator | Install python3-docker 
-------------------------------------------------- 2.74s 2026-01-13 00:35:14.004330 | orchestrator | Apply netplan configuration --------------------------------------------- 2.14s 2026-01-13 00:35:14.004341 | orchestrator | Apply netplan configuration --------------------------------------------- 2.00s 2026-01-13 00:35:14.004352 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.71s 2026-01-13 00:35:14.004362 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.51s 2026-01-13 00:35:14.004373 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s 2026-01-13 00:35:14.004383 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.48s 2026-01-13 00:35:14.004394 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.46s 2026-01-13 00:35:14.004409 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s 2026-01-13 00:35:14.004420 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.63s 2026-01-13 00:35:14.004438 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s 2026-01-13 00:35:14.644250 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-01-13 00:35:26.661008 | orchestrator | 2026-01-13 00:35:26 | INFO  | Task dd3467b7-2d05-49ce-b4ed-d7ed65f1d10f (reboot) was prepared for execution. 2026-01-13 00:35:26.661147 | orchestrator | 2026-01-13 00:35:26 | INFO  | It takes a moment until task dd3467b7-2d05-49ce-b4ed-d7ed65f1d10f (reboot) has been started and output is visible here. 
2026-01-13 00:35:36.439416 | orchestrator | 2026-01-13 00:35:36.439507 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-13 00:35:36.439515 | orchestrator | 2026-01-13 00:35:36.439520 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-13 00:35:36.439524 | orchestrator | Tuesday 13 January 2026 00:35:30 +0000 (0:00:00.150) 0:00:00.150 ******* 2026-01-13 00:35:36.439528 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:35:36.439533 | orchestrator | 2026-01-13 00:35:36.439537 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-13 00:35:36.439541 | orchestrator | Tuesday 13 January 2026 00:35:30 +0000 (0:00:00.076) 0:00:00.227 ******* 2026-01-13 00:35:36.439546 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:35:36.439549 | orchestrator | 2026-01-13 00:35:36.439553 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-13 00:35:36.439557 | orchestrator | Tuesday 13 January 2026 00:35:31 +0000 (0:00:00.918) 0:00:01.145 ******* 2026-01-13 00:35:36.439561 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:35:36.439565 | orchestrator | 2026-01-13 00:35:36.439568 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-13 00:35:36.439572 | orchestrator | 2026-01-13 00:35:36.439576 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-13 00:35:36.439580 | orchestrator | Tuesday 13 January 2026 00:35:31 +0000 (0:00:00.093) 0:00:01.238 ******* 2026-01-13 00:35:36.439584 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:35:36.439587 | orchestrator | 2026-01-13 00:35:36.439607 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-13 00:35:36.439611 | orchestrator | Tuesday 13 January 
2026 00:35:31 +0000 (0:00:00.084) 0:00:01.323 ******* 2026-01-13 00:35:36.439615 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:35:36.439618 | orchestrator | 2026-01-13 00:35:36.439622 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-13 00:35:36.439626 | orchestrator | Tuesday 13 January 2026 00:35:32 +0000 (0:00:00.660) 0:00:01.983 ******* 2026-01-13 00:35:36.439630 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:35:36.439633 | orchestrator | 2026-01-13 00:35:36.439637 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-13 00:35:36.439641 | orchestrator | 2026-01-13 00:35:36.439645 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-13 00:35:36.439648 | orchestrator | Tuesday 13 January 2026 00:35:32 +0000 (0:00:00.096) 0:00:02.079 ******* 2026-01-13 00:35:36.439652 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:35:36.439656 | orchestrator | 2026-01-13 00:35:36.439660 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-13 00:35:36.439664 | orchestrator | Tuesday 13 January 2026 00:35:32 +0000 (0:00:00.142) 0:00:02.221 ******* 2026-01-13 00:35:36.439668 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:35:36.439671 | orchestrator | 2026-01-13 00:35:36.439675 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-13 00:35:36.439679 | orchestrator | Tuesday 13 January 2026 00:35:33 +0000 (0:00:00.645) 0:00:02.867 ******* 2026-01-13 00:35:36.439683 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:35:36.439686 | orchestrator | 2026-01-13 00:35:36.439690 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-13 00:35:36.439694 | orchestrator | 2026-01-13 00:35:36.439698 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-01-13 00:35:36.439701 | orchestrator | Tuesday 13 January 2026 00:35:33 +0000 (0:00:00.090) 0:00:02.958 ******* 2026-01-13 00:35:36.439705 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:35:36.439709 | orchestrator | 2026-01-13 00:35:36.439712 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-13 00:35:36.439716 | orchestrator | Tuesday 13 January 2026 00:35:33 +0000 (0:00:00.078) 0:00:03.037 ******* 2026-01-13 00:35:36.439720 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:35:36.439724 | orchestrator | 2026-01-13 00:35:36.439727 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-13 00:35:36.439731 | orchestrator | Tuesday 13 January 2026 00:35:34 +0000 (0:00:00.653) 0:00:03.690 ******* 2026-01-13 00:35:36.439735 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:35:36.439739 | orchestrator | 2026-01-13 00:35:36.439742 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-13 00:35:36.439746 | orchestrator | 2026-01-13 00:35:36.439750 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-13 00:35:36.439754 | orchestrator | Tuesday 13 January 2026 00:35:34 +0000 (0:00:00.109) 0:00:03.799 ******* 2026-01-13 00:35:36.439757 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:35:36.439761 | orchestrator | 2026-01-13 00:35:36.439765 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-13 00:35:36.439768 | orchestrator | Tuesday 13 January 2026 00:35:34 +0000 (0:00:00.086) 0:00:03.886 ******* 2026-01-13 00:35:36.439772 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:35:36.439776 | orchestrator | 2026-01-13 00:35:36.439780 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-01-13 00:35:36.439784 | orchestrator | Tuesday 13 January 2026 00:35:35 +0000 (0:00:00.736) 0:00:04.623 ******* 2026-01-13 00:35:36.439788 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:35:36.439791 | orchestrator | 2026-01-13 00:35:36.439795 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-13 00:35:36.439799 | orchestrator | 2026-01-13 00:35:36.439812 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-13 00:35:36.439820 | orchestrator | Tuesday 13 January 2026 00:35:35 +0000 (0:00:00.117) 0:00:04.740 ******* 2026-01-13 00:35:36.439824 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:35:36.439827 | orchestrator | 2026-01-13 00:35:36.439831 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-13 00:35:36.439835 | orchestrator | Tuesday 13 January 2026 00:35:35 +0000 (0:00:00.091) 0:00:04.832 ******* 2026-01-13 00:35:36.439838 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:35:36.439842 | orchestrator | 2026-01-13 00:35:36.439846 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-13 00:35:36.439850 | orchestrator | Tuesday 13 January 2026 00:35:36 +0000 (0:00:00.696) 0:00:05.528 ******* 2026-01-13 00:35:36.439863 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:35:36.439867 | orchestrator | 2026-01-13 00:35:36.439871 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:35:36.439875 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:35:36.439880 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:35:36.439884 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-01-13 00:35:36.439887 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:35:36.439891 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:35:36.439895 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:35:36.439899 | orchestrator | 2026-01-13 00:35:36.439902 | orchestrator | 2026-01-13 00:35:36.439906 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:35:36.439910 | orchestrator | Tuesday 13 January 2026 00:35:36 +0000 (0:00:00.034) 0:00:05.563 ******* 2026-01-13 00:35:36.439914 | orchestrator | =============================================================================== 2026-01-13 00:35:36.439917 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.31s 2026-01-13 00:35:36.439924 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.56s 2026-01-13 00:35:36.439982 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.54s 2026-01-13 00:35:36.706425 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-01-13 00:35:48.880507 | orchestrator | 2026-01-13 00:35:48 | INFO  | Task 71b89266-deb6-462c-a198-3aa3a9843757 (wait-for-connection) was prepared for execution. 2026-01-13 00:35:48.880612 | orchestrator | 2026-01-13 00:35:48 | INFO  | It takes a moment until task 71b89266-deb6-462c-a198-3aa3a9843757 (wait-for-connection) has been started and output is visible here. 
2026-01-13 00:36:04.765062 | orchestrator | 2026-01-13 00:36:04.765159 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-01-13 00:36:04.765173 | orchestrator | 2026-01-13 00:36:04.765183 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-01-13 00:36:04.765193 | orchestrator | Tuesday 13 January 2026 00:35:52 +0000 (0:00:00.169) 0:00:00.169 ******* 2026-01-13 00:36:04.765202 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:36:04.765212 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:36:04.765221 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:36:04.765230 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:36:04.765238 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:36:04.765247 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:36:04.765279 | orchestrator | 2026-01-13 00:36:04.765289 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:36:04.765299 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:36:04.765309 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:36:04.765318 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:36:04.765327 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:36:04.765336 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:36:04.765345 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:36:04.765353 | orchestrator | 2026-01-13 00:36:04.765362 | orchestrator | 2026-01-13 00:36:04.765371 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-13 00:36:04.765380 | orchestrator | Tuesday 13 January 2026 00:36:04 +0000 (0:00:11.477) 0:00:11.646 ******* 2026-01-13 00:36:04.765400 | orchestrator | =============================================================================== 2026-01-13 00:36:04.765409 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.48s 2026-01-13 00:36:05.160383 | orchestrator | + osism apply hddtemp 2026-01-13 00:36:17.161440 | orchestrator | 2026-01-13 00:36:17 | INFO  | Task 7ca78f17-8193-46a9-8856-5f474ebd664b (hddtemp) was prepared for execution. 2026-01-13 00:36:17.161550 | orchestrator | 2026-01-13 00:36:17 | INFO  | It takes a moment until task 7ca78f17-8193-46a9-8856-5f474ebd664b (hddtemp) has been started and output is visible here. 2026-01-13 00:36:45.040924 | orchestrator | 2026-01-13 00:36:45.041092 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-01-13 00:36:45.041112 | orchestrator | 2026-01-13 00:36:45.041124 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-01-13 00:36:45.041136 | orchestrator | Tuesday 13 January 2026 00:36:21 +0000 (0:00:00.252) 0:00:00.252 ******* 2026-01-13 00:36:45.041148 | orchestrator | ok: [testbed-manager] 2026-01-13 00:36:45.041160 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:36:45.041170 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:36:45.041181 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:36:45.041192 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:36:45.041202 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:36:45.041213 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:36:45.041224 | orchestrator | 2026-01-13 00:36:45.041235 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-01-13 00:36:45.041246 | orchestrator | Tuesday 13 January 2026 
00:36:22 +0000 (0:00:00.684) 0:00:00.936 ******* 2026-01-13 00:36:45.041259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:36:45.041272 | orchestrator | 2026-01-13 00:36:45.041283 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-01-13 00:36:45.041294 | orchestrator | Tuesday 13 January 2026 00:36:23 +0000 (0:00:01.155) 0:00:02.092 ******* 2026-01-13 00:36:45.041305 | orchestrator | ok: [testbed-manager] 2026-01-13 00:36:45.041316 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:36:45.041327 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:36:45.041338 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:36:45.041349 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:36:45.041360 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:36:45.041370 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:36:45.041404 | orchestrator | 2026-01-13 00:36:45.041415 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-01-13 00:36:45.041429 | orchestrator | Tuesday 13 January 2026 00:36:25 +0000 (0:00:02.214) 0:00:04.307 ******* 2026-01-13 00:36:45.041441 | orchestrator | changed: [testbed-manager] 2026-01-13 00:36:45.041454 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:36:45.041467 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:36:45.041480 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:36:45.041492 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:36:45.041505 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:36:45.041517 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:36:45.041530 | orchestrator | 2026-01-13 00:36:45.041542 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-01-13 00:36:45.041555 | orchestrator | Tuesday 13 January 2026 00:36:26 +0000 (0:00:01.057) 0:00:05.364 ******* 2026-01-13 00:36:45.041567 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:36:45.041580 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:36:45.041592 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:36:45.041605 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:36:45.041618 | orchestrator | ok: [testbed-manager] 2026-01-13 00:36:45.041630 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:36:45.041642 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:36:45.041655 | orchestrator | 2026-01-13 00:36:45.041667 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-01-13 00:36:45.041680 | orchestrator | Tuesday 13 January 2026 00:36:27 +0000 (0:00:01.087) 0:00:06.451 ******* 2026-01-13 00:36:45.041694 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:36:45.041706 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:36:45.041718 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:36:45.041731 | orchestrator | changed: [testbed-manager] 2026-01-13 00:36:45.041743 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:36:45.041756 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:36:45.041775 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:36:45.041794 | orchestrator | 2026-01-13 00:36:45.041831 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-01-13 00:36:45.041870 | orchestrator | Tuesday 13 January 2026 00:36:28 +0000 (0:00:00.758) 0:00:07.210 ******* 2026-01-13 00:36:45.041902 | orchestrator | changed: [testbed-manager] 2026-01-13 00:36:45.041919 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:36:45.041936 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:36:45.041954 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:36:45.041971 | orchestrator | changed: 
[testbed-node-5] 2026-01-13 00:36:45.041987 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:36:45.042003 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:36:45.042094 | orchestrator | 2026-01-13 00:36:45.042114 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-01-13 00:36:45.042134 | orchestrator | Tuesday 13 January 2026 00:36:41 +0000 (0:00:13.623) 0:00:20.834 ******* 2026-01-13 00:36:45.042156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:36:45.042177 | orchestrator | 2026-01-13 00:36:45.042198 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-01-13 00:36:45.042221 | orchestrator | Tuesday 13 January 2026 00:36:43 +0000 (0:00:01.165) 0:00:22.000 ******* 2026-01-13 00:36:45.042241 | orchestrator | changed: [testbed-manager] 2026-01-13 00:36:45.042258 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:36:45.042269 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:36:45.042280 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:36:45.042306 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:36:45.042317 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:36:45.042328 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:36:45.042355 | orchestrator | 2026-01-13 00:36:45.042374 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:36:45.042392 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:36:45.042437 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:36:45.042458 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:36:45.042477 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:36:45.042496 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:36:45.042515 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:36:45.042533 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:36:45.042552 | orchestrator | 2026-01-13 00:36:45.042566 | orchestrator | 2026-01-13 00:36:45.042577 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:36:45.042588 | orchestrator | Tuesday 13 January 2026 00:36:44 +0000 (0:00:01.737) 0:00:23.738 ******* 2026-01-13 00:36:45.042599 | orchestrator | =============================================================================== 2026-01-13 00:36:45.042609 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.62s 2026-01-13 00:36:45.042620 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.21s 2026-01-13 00:36:45.042630 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.74s 2026-01-13 00:36:45.042641 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.17s 2026-01-13 00:36:45.042651 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.16s 2026-01-13 00:36:45.042662 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.09s 2026-01-13 00:36:45.042672 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.06s 2026-01-13 00:36:45.042683 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.76s 2026-01-13 00:36:45.042694 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.68s 2026-01-13 00:36:45.220629 | orchestrator | ++ semver latest 7.1.1 2026-01-13 00:36:45.267481 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-13 00:36:45.267606 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-13 00:36:45.267625 | orchestrator | + sudo systemctl restart manager.service 2026-01-13 00:36:58.349153 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-13 00:36:58.349266 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-13 00:36:58.349281 | orchestrator | + local max_attempts=60 2026-01-13 00:36:58.349294 | orchestrator | + local name=ceph-ansible 2026-01-13 00:36:58.349306 | orchestrator | + local attempt_num=1 2026-01-13 00:36:58.349317 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:36:58.380367 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-13 00:36:58.380455 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-13 00:36:58.380469 | orchestrator | + sleep 5 2026-01-13 00:37:03.384082 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:37:03.407547 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-13 00:37:03.407630 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-13 00:37:03.407645 | orchestrator | + sleep 5 2026-01-13 00:37:08.409561 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:37:08.448435 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-13 00:37:08.448567 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-13 00:37:08.448593 | orchestrator | + sleep 5 2026-01-13 00:37:13.454276 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:37:13.491517 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-13 00:37:13.491633 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-13 00:37:13.491659 | orchestrator | + sleep 5 2026-01-13 00:37:18.496215 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:37:18.531760 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-13 00:37:18.531921 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-13 00:37:18.531939 | orchestrator | + sleep 5 2026-01-13 00:37:23.537021 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:37:23.573506 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-13 00:37:23.573600 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-13 00:37:23.573612 | orchestrator | + sleep 5 2026-01-13 00:37:28.577662 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:37:28.617366 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-13 00:37:28.617461 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-13 00:37:28.617475 | orchestrator | + sleep 5 2026-01-13 00:37:33.622891 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:37:33.649859 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-13 00:37:33.649934 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-13 00:37:33.649949 | orchestrator | + sleep 5 2026-01-13 00:37:38.653348 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:37:38.675170 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-13 00:37:38.675241 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-13 00:37:38.675254 | orchestrator | + sleep 5 2026-01-13 00:37:43.678377 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:37:43.715529 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2026-01-13 00:37:43.715636 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-13 00:37:43.715659 | orchestrator | + sleep 5 2026-01-13 00:37:48.720163 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:37:48.761939 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-13 00:37:48.762081 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-13 00:37:48.762098 | orchestrator | + sleep 5 2026-01-13 00:37:53.766715 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:37:53.799143 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-13 00:37:53.799205 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-13 00:37:53.799215 | orchestrator | + sleep 5 2026-01-13 00:37:58.803471 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:37:58.843283 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-13 00:37:58.843381 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-13 00:37:58.843397 | orchestrator | + sleep 5 2026-01-13 00:38:03.847977 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-13 00:38:03.886005 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-13 00:38:03.886152 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-13 00:38:03.886167 | orchestrator | + local max_attempts=60 2026-01-13 00:38:03.886180 | orchestrator | + local name=kolla-ansible 2026-01-13 00:38:03.886192 | orchestrator | + local attempt_num=1 2026-01-13 00:38:03.886832 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-13 00:38:03.912099 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-13 00:38:03.912217 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-13 00:38:03.912240 | orchestrator | + local max_attempts=60 2026-01-13 
00:38:03.912254 | orchestrator | + local name=osism-ansible 2026-01-13 00:38:03.912266 | orchestrator | + local attempt_num=1 2026-01-13 00:38:03.913038 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-13 00:38:03.942012 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-13 00:38:03.942171 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-13 00:38:03.942187 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-13 00:38:04.095740 | orchestrator | ARA in ceph-ansible already disabled. 2026-01-13 00:38:04.241250 | orchestrator | ARA in kolla-ansible already disabled. 2026-01-13 00:38:04.400538 | orchestrator | ARA in osism-ansible already disabled. 2026-01-13 00:38:04.556033 | orchestrator | ARA in osism-kubernetes already disabled. 2026-01-13 00:38:04.556653 | orchestrator | + osism apply gather-facts 2026-01-13 00:38:16.631023 | orchestrator | 2026-01-13 00:38:16 | INFO  | Task 6ce44a44-dd4f-4cb9-89b4-2d880292c235 (gather-facts) was prepared for execution. 2026-01-13 00:38:16.631160 | orchestrator | 2026-01-13 00:38:16 | INFO  | It takes a moment until task 6ce44a44-dd4f-4cb9-89b4-2d880292c235 (gather-facts) has been started and output is visible here. 
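The `wait_for_container_healthy 60 ceph-ansible` trace above (and the follow-up calls for kolla-ansible and osism-ansible) polls `docker inspect` for the container health status every 5 seconds. A hedged reconstruction of that function from the `set -x` output; the argument order and the 5s sleep match the log, while the `DOCKER` override is an addition for testability, not part of the original script:

```shell
# Reconstructed from the set -x trace above: poll the container's health
# status until it reports "healthy" or max_attempts is reached.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # DOCKER override is a testability shim; the trace hard-codes /usr/bin/docker.
    until [[ "$(${DOCKER:-/usr/bin/docker} inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num == max_attempts )); then
            echo "container $name never became healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

As the log shows, the status typically moves through `unhealthy` or `starting` for a while after `systemctl restart manager.service` before settling on `healthy`.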
2026-01-13 00:38:29.442442 | orchestrator | 2026-01-13 00:38:29.442558 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-13 00:38:29.442577 | orchestrator | 2026-01-13 00:38:29.442590 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-13 00:38:29.442602 | orchestrator | Tuesday 13 January 2026 00:38:20 +0000 (0:00:00.186) 0:00:00.186 ******* 2026-01-13 00:38:29.442613 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:38:29.442630 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:38:29.442648 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:38:29.442666 | orchestrator | ok: [testbed-manager] 2026-01-13 00:38:29.442684 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:38:29.442703 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:38:29.442720 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:38:29.442737 | orchestrator | 2026-01-13 00:38:29.442784 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-13 00:38:29.442803 | orchestrator | 2026-01-13 00:38:29.442819 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-13 00:38:29.442838 | orchestrator | Tuesday 13 January 2026 00:38:28 +0000 (0:00:08.414) 0:00:08.600 ******* 2026-01-13 00:38:29.442856 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:38:29.442876 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:38:29.442895 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:38:29.442914 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:38:29.442926 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:38:29.442937 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:38:29.442948 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:38:29.442958 | orchestrator | 2026-01-13 00:38:29.442969 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-13 00:38:29.442982 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:38:29.442996 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:38:29.443008 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:38:29.443020 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:38:29.443032 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:38:29.443045 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:38:29.443057 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 00:38:29.443069 | orchestrator | 2026-01-13 00:38:29.443082 | orchestrator | 2026-01-13 00:38:29.443094 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:38:29.443106 | orchestrator | Tuesday 13 January 2026 00:38:29 +0000 (0:00:00.419) 0:00:09.019 ******* 2026-01-13 00:38:29.443142 | orchestrator | =============================================================================== 2026-01-13 00:38:29.443156 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.41s 2026-01-13 00:38:29.443200 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.42s 2026-01-13 00:38:29.685155 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-13 00:38:29.692428 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-13 00:38:29.699981 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-13 00:38:29.714670 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-13 00:38:29.723166 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-13 00:38:29.732282 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-13 00:38:29.740295 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-13 00:38:29.748771 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-13 00:38:29.764439 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-13 00:38:29.777510 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-13 00:38:29.785510 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-13 00:38:29.795170 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-13 00:38:29.803623 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-13 00:38:29.814391 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-13 00:38:29.826231 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-13 00:38:29.841246 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-01-13 00:38:29.856822 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-01-13 00:38:29.868078 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-01-13 00:38:29.879608 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-01-13 00:38:29.896986 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-01-13 00:38:29.912870 | orchestrator | + [[ false == \t\r\u\e ]] 2026-01-13 00:38:30.383032 | orchestrator | ok: Runtime: 0:24:20.888740 2026-01-13 00:38:30.511462 | 2026-01-13 00:38:30.511636 | TASK [Deploy services] 2026-01-13 00:38:31.047339 | orchestrator | skipping: Conditional result was False 2026-01-13 00:38:31.067799 | 2026-01-13 00:38:31.067997 | TASK [Deploy in a nutshell] 2026-01-13 00:38:31.866218 | orchestrator | + set -e 2026-01-13 00:38:31.867627 | orchestrator | 2026-01-13 00:38:31.867672 | orchestrator | # PULL IMAGES 2026-01-13 00:38:31.867687 | orchestrator | 2026-01-13 00:38:31.867706 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-13 00:38:31.867726 | orchestrator | ++ export INTERACTIVE=false 2026-01-13 00:38:31.867741 | orchestrator | ++ INTERACTIVE=false 2026-01-13 00:38:31.867817 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-13 00:38:31.867844 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-13 00:38:31.867860 | orchestrator | + source /opt/manager-vars.sh 2026-01-13 00:38:31.867872 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-13 00:38:31.867891 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-13 00:38:31.867902 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-13 00:38:31.867920 | orchestrator | ++ 
CEPH_VERSION=reef 2026-01-13 00:38:31.867932 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-13 00:38:31.867950 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-13 00:38:31.867961 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-13 00:38:31.867976 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-13 00:38:31.867987 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-13 00:38:31.868004 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-13 00:38:31.868023 | orchestrator | ++ export ARA=false 2026-01-13 00:38:31.868041 | orchestrator | ++ ARA=false 2026-01-13 00:38:31.868060 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-13 00:38:31.868079 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-13 00:38:31.868097 | orchestrator | ++ export TEMPEST=true 2026-01-13 00:38:31.868114 | orchestrator | ++ TEMPEST=true 2026-01-13 00:38:31.868134 | orchestrator | ++ export IS_ZUUL=true 2026-01-13 00:38:31.868154 | orchestrator | ++ IS_ZUUL=true 2026-01-13 00:38:31.868173 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.234 2026-01-13 00:38:31.868193 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.234 2026-01-13 00:38:31.868210 | orchestrator | ++ export EXTERNAL_API=false 2026-01-13 00:38:31.868229 | orchestrator | ++ EXTERNAL_API=false 2026-01-13 00:38:31.868247 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-13 00:38:31.868267 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-13 00:38:31.868286 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-13 00:38:31.868306 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-13 00:38:31.868324 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-13 00:38:31.868337 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-13 00:38:31.868349 | orchestrator | + echo 2026-01-13 00:38:31.868360 | orchestrator | + echo '# PULL IMAGES' 2026-01-13 00:38:31.868371 | orchestrator | + echo 2026-01-13 00:38:31.868401 | orchestrator | ++ semver latest 7.0.0 2026-01-13 
00:38:31.926223 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-13 00:38:31.926306 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-13 00:38:31.926320 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-01-13 00:38:33.829050 | orchestrator | 2026-01-13 00:38:33 | INFO  | Trying to run play pull-images in environment custom 2026-01-13 00:38:44.046896 | orchestrator | 2026-01-13 00:38:44 | INFO  | Task 6e14501f-fcd9-40f9-abe9-92730c7570b3 (pull-images) was prepared for execution. 2026-01-13 00:38:44.047036 | orchestrator | 2026-01-13 00:38:44 | INFO  | Task 6e14501f-fcd9-40f9-abe9-92730c7570b3 is running in background. No more output. Check ARA for logs. 2026-01-13 00:38:46.294388 | orchestrator | 2026-01-13 00:38:46 | INFO  | Trying to run play wipe-partitions in environment custom 2026-01-13 00:38:56.464090 | orchestrator | 2026-01-13 00:38:56 | INFO  | Task 2c21fb30-2dda-4d49-a0de-66a5504a8069 (wipe-partitions) was prepared for execution. 2026-01-13 00:38:56.464216 | orchestrator | 2026-01-13 00:38:56 | INFO  | It takes a moment until task 2c21fb30-2dda-4d49-a0de-66a5504a8069 (wipe-partitions) has been started and output is visible here. 
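The `semver latest 7.0.0` / `[[ -1 -ge 0 ]]` / `[[ latest == latest ]]` sequence in the trace is a version gate: take the newer code path when the manager version compares greater than or equal to a threshold, or when it is the moving `latest` tag. A sketch of that gate under the assumption (suggested by the `[[ -1 -ge 0 ]]` test in the log) that the `semver` helper prints -1, 0, or 1 as a three-way compare; `manager_version_at_least` is an illustrative name:

```shell
# Version gate modeled on the trace above. Assumes `semver a b` prints
# -1/0/1 for less/equal/greater; "latest" always passes the gate.
manager_version_at_least() {
    local required=$1
    if [[ "${MANAGER_VERSION}" == latest ]]; then
        return 0
    fi
    [[ "$(semver "${MANAGER_VERSION}" "${required}")" -ge 0 ]]
}
```

In the run above `MANAGER_VERSION=latest` (see the sourced `/opt/manager-vars.sh`), so the numeric compare returns -1 but the `latest` branch still enables the gated behavior.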
2026-01-13 00:39:10.693560 | orchestrator | 2026-01-13 00:39:10.693672 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-01-13 00:39:10.693689 | orchestrator | 2026-01-13 00:39:10.693701 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-01-13 00:39:10.693766 | orchestrator | Tuesday 13 January 2026 00:39:00 +0000 (0:00:00.134) 0:00:00.134 ******* 2026-01-13 00:39:10.693788 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:39:10.693809 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:39:10.693829 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:39:10.693845 | orchestrator | 2026-01-13 00:39:10.693856 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-01-13 00:39:10.693891 | orchestrator | Tuesday 13 January 2026 00:39:01 +0000 (0:00:00.600) 0:00:00.735 ******* 2026-01-13 00:39:10.693903 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:10.693914 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:39:10.693928 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:39:10.693939 | orchestrator | 2026-01-13 00:39:10.693951 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-01-13 00:39:10.693962 | orchestrator | Tuesday 13 January 2026 00:39:01 +0000 (0:00:00.325) 0:00:01.060 ******* 2026-01-13 00:39:10.693973 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:39:10.693984 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:39:10.693995 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:39:10.694005 | orchestrator | 2026-01-13 00:39:10.694068 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-01-13 00:39:10.694081 | orchestrator | Tuesday 13 January 2026 00:39:02 +0000 (0:00:00.568) 0:00:01.629 ******* 2026-01-13 00:39:10.694094 | orchestrator | skipping: 
[testbed-node-3] 2026-01-13 00:39:10.694106 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:39:10.694119 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:39:10.694130 | orchestrator | 2026-01-13 00:39:10.694142 | orchestrator | TASK [Check device availability] *********************************************** 2026-01-13 00:39:10.694154 | orchestrator | Tuesday 13 January 2026 00:39:02 +0000 (0:00:00.238) 0:00:01.867 ******* 2026-01-13 00:39:10.694174 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-13 00:39:10.694196 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-13 00:39:10.694215 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-13 00:39:10.694233 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-13 00:39:10.694251 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-13 00:39:10.694270 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-13 00:39:10.694290 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-13 00:39:10.694309 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-13 00:39:10.694327 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-13 00:39:10.694339 | orchestrator | 2026-01-13 00:39:10.694352 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-01-13 00:39:10.694366 | orchestrator | Tuesday 13 January 2026 00:39:03 +0000 (0:00:01.160) 0:00:03.028 ******* 2026-01-13 00:39:10.694378 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-01-13 00:39:10.694391 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-01-13 00:39:10.694403 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-01-13 00:39:10.694415 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-01-13 00:39:10.694427 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-01-13 00:39:10.694439 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-01-13 00:39:10.694451 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-01-13 00:39:10.694463 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-01-13 00:39:10.694474 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-01-13 00:39:10.694484 | orchestrator | 2026-01-13 00:39:10.694495 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-01-13 00:39:10.694506 | orchestrator | Tuesday 13 January 2026 00:39:05 +0000 (0:00:01.475) 0:00:04.503 ******* 2026-01-13 00:39:10.694516 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-13 00:39:10.694527 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-13 00:39:10.694538 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-13 00:39:10.694548 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-13 00:39:10.694559 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-13 00:39:10.694569 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-13 00:39:10.694580 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-13 00:39:10.694604 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-13 00:39:10.694622 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-13 00:39:10.694633 | orchestrator | 2026-01-13 00:39:10.694647 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-01-13 00:39:10.694665 | orchestrator | Tuesday 13 January 2026 00:39:09 +0000 (0:00:04.019) 0:00:08.523 ******* 2026-01-13 00:39:10.694683 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:39:10.694699 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:39:10.694716 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:39:10.694814 | orchestrator | 2026-01-13 00:39:10.694835 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-01-13 00:39:10.694854 | orchestrator | Tuesday 13 January 2026 00:39:09 +0000 (0:00:00.596) 0:00:09.120 ******* 2026-01-13 00:39:10.694872 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:39:10.694890 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:39:10.694901 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:39:10.694911 | orchestrator | 2026-01-13 00:39:10.694922 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:39:10.694943 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:39:10.694964 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:39:10.695014 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:39:10.695034 | orchestrator | 2026-01-13 00:39:10.695047 | orchestrator | 2026-01-13 00:39:10.695058 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:39:10.695068 | orchestrator | Tuesday 13 January 2026 00:39:10 +0000 (0:00:00.602) 0:00:09.722 ******* 2026-01-13 00:39:10.695079 | orchestrator | =============================================================================== 2026-01-13 00:39:10.695089 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 4.02s 2026-01-13 00:39:10.695100 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.48s 2026-01-13 00:39:10.695111 | orchestrator | Check device availability ----------------------------------------------- 1.16s 2026-01-13 00:39:10.695122 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s 2026-01-13 00:39:10.695132 | orchestrator | Find all logical devices owned by UID 167 
------------------------------- 0.60s 2026-01-13 00:39:10.695143 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s 2026-01-13 00:39:10.695153 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.57s 2026-01-13 00:39:10.695164 | orchestrator | Remove all rook related logical devices --------------------------------- 0.33s 2026-01-13 00:39:10.695174 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2026-01-13 00:39:23.013698 | orchestrator | 2026-01-13 00:39:23 | INFO  | Task f01a7713-5f5f-42c7-b15b-1d9d3ceefd85 (facts) was prepared for execution. 2026-01-13 00:39:23.013852 | orchestrator | 2026-01-13 00:39:23 | INFO  | It takes a moment until task f01a7713-5f5f-42c7-b15b-1d9d3ceefd85 (facts) has been started and output is visible here. 2026-01-13 00:39:34.755525 | orchestrator | 2026-01-13 00:39:34.755636 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-13 00:39:34.755653 | orchestrator | 2026-01-13 00:39:34.755665 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-13 00:39:34.755678 | orchestrator | Tuesday 13 January 2026 00:39:27 +0000 (0:00:00.254) 0:00:00.254 ******* 2026-01-13 00:39:34.755689 | orchestrator | ok: [testbed-manager] 2026-01-13 00:39:34.755763 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:39:34.755778 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:39:34.755816 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:39:34.755828 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:39:34.755839 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:39:34.755849 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:39:34.755860 | orchestrator | 2026-01-13 00:39:34.755871 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-13 00:39:34.755882 | 
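The core of the wipe-partitions play above is `wipefs -a` followed by zeroing the first 32M of each device ("Overwrite first 32M with zeros", the 4.02s task in the recap). A minimal sketch of that zero-fill step, run against a scratch file standing in for `/dev/sdb` (illustrative only; the play targets real block devices and follows up with udev reload/trigger):

```python
import os
import tempfile

WIPE_BYTES = 32 * 1024 * 1024  # mirrors "Overwrite first 32M with zeros"

def wipe_head(path: str, length: int = WIPE_BYTES) -> None:
    """Zero the first `length` bytes of a file or block device,
    like `dd if=/dev/zero of=<dev> bs=1M count=32`."""
    with open(path, "r+b") as f:
        f.write(b"\x00" * length)

# Demo on a 48M scratch file filled with random data (stand-in for a disk).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(48 * 1024 * 1024))

wipe_head(path)

with open(path, "rb") as f:
    head = f.read(WIPE_BYTES)
size = os.path.getsize(path)
os.remove(path)

print(all(b == 0 for b in head))  # True: device head is zeroed
```

On a real node the play then runs `udevadm control --reload-rules` ("Reload udev rules") and `udevadm trigger` ("Request device events from the kernel") so the kernel re-reads the now-empty partition tables.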
orchestrator | Tuesday 13 January 2026 00:39:28 +0000 (0:00:00.912) 0:00:01.166 ******* 2026-01-13 00:39:34.755893 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:39:34.755904 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:39:34.755914 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:39:34.755925 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:39:34.755936 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:34.755946 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:39:34.755957 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:39:34.755967 | orchestrator | 2026-01-13 00:39:34.755978 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-13 00:39:34.755989 | orchestrator | 2026-01-13 00:39:34.756015 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-13 00:39:34.756026 | orchestrator | Tuesday 13 January 2026 00:39:29 +0000 (0:00:01.042) 0:00:02.209 ******* 2026-01-13 00:39:34.756037 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:39:34.756047 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:39:34.756059 | orchestrator | ok: [testbed-manager] 2026-01-13 00:39:34.756070 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:39:34.756102 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:39:34.756113 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:39:34.756124 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:39:34.756135 | orchestrator | 2026-01-13 00:39:34.756146 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-13 00:39:34.756157 | orchestrator | 2026-01-13 00:39:34.756167 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-13 00:39:34.756178 | orchestrator | Tuesday 13 January 2026 00:39:33 +0000 (0:00:04.791) 0:00:07.000 ******* 2026-01-13 00:39:34.756189 | orchestrator | 
skipping: [testbed-manager] 2026-01-13 00:39:34.756199 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:39:34.756210 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:39:34.756221 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:39:34.756231 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:34.756242 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:39:34.756252 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:39:34.756263 | orchestrator | 2026-01-13 00:39:34.756274 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:39:34.756285 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:39:34.756297 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:39:34.756308 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:39:34.756319 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:39:34.756329 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:39:34.756340 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:39:34.756351 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:39:34.756362 | orchestrator | 2026-01-13 00:39:34.756382 | orchestrator | 2026-01-13 00:39:34.756393 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:39:34.756404 | orchestrator | Tuesday 13 January 2026 00:39:34 +0000 (0:00:00.495) 0:00:07.496 ******* 2026-01-13 00:39:34.756414 | orchestrator | =============================================================================== 
2026-01-13 00:39:34.756425 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.79s 2026-01-13 00:39:34.756436 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.04s 2026-01-13 00:39:34.756447 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.91s 2026-01-13 00:39:34.756458 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-01-13 00:39:37.049317 | orchestrator | 2026-01-13 00:39:37 | INFO  | Task 1aaf4672-483a-4030-a225-6812d7e6d7f1 (ceph-configure-lvm-volumes) was prepared for execution. 2026-01-13 00:39:37.049417 | orchestrator | 2026-01-13 00:39:37 | INFO  | It takes a moment until task 1aaf4672-483a-4030-a225-6812d7e6d7f1 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-01-13 00:39:48.047914 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-13 00:39:48.048022 | orchestrator | 2.16.14 2026-01-13 00:39:48.048039 | orchestrator | 2026-01-13 00:39:48.048052 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-13 00:39:48.048064 | orchestrator | 2026-01-13 00:39:48.048075 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-13 00:39:48.048087 | orchestrator | Tuesday 13 January 2026 00:39:41 +0000 (0:00:00.305) 0:00:00.305 ******* 2026-01-13 00:39:48.048099 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-13 00:39:48.048110 | orchestrator | 2026-01-13 00:39:48.048121 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-13 00:39:48.048132 | orchestrator | Tuesday 13 January 2026 00:39:41 +0000 (0:00:00.229) 0:00:00.535 ******* 2026-01-13 00:39:48.048143 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:39:48.048154 | orchestrator | 
2026-01-13 00:39:48.048164 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:39:48.048175 | orchestrator | Tuesday 13 January 2026 00:39:41 +0000 (0:00:00.208) 0:00:00.743 ******* 2026-01-13 00:39:48.048187 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-01-13 00:39:48.048207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-01-13 00:39:48.048218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-01-13 00:39:48.048229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-01-13 00:39:48.048240 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-01-13 00:39:48.048251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-01-13 00:39:48.048261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-01-13 00:39:48.048272 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-01-13 00:39:48.048283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-01-13 00:39:48.048294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-01-13 00:39:48.048305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-01-13 00:39:48.048315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-01-13 00:39:48.048326 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-01-13 00:39:48.048337 | orchestrator | 2026-01-13 00:39:48.048348 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-01-13 00:39:48.048376 | orchestrator | Tuesday 13 January 2026 00:39:42 +0000 (0:00:00.461) 0:00:01.205 ******* 2026-01-13 00:39:48.048388 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.048398 | orchestrator | 2026-01-13 00:39:48.048409 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:39:48.048420 | orchestrator | Tuesday 13 January 2026 00:39:42 +0000 (0:00:00.187) 0:00:01.393 ******* 2026-01-13 00:39:48.048431 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.048443 | orchestrator | 2026-01-13 00:39:48.048455 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:39:48.048468 | orchestrator | Tuesday 13 January 2026 00:39:42 +0000 (0:00:00.168) 0:00:01.561 ******* 2026-01-13 00:39:48.048480 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.048494 | orchestrator | 2026-01-13 00:39:48.048507 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:39:48.048525 | orchestrator | Tuesday 13 January 2026 00:39:42 +0000 (0:00:00.179) 0:00:01.741 ******* 2026-01-13 00:39:48.048538 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.048550 | orchestrator | 2026-01-13 00:39:48.048564 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:39:48.048576 | orchestrator | Tuesday 13 January 2026 00:39:42 +0000 (0:00:00.176) 0:00:01.917 ******* 2026-01-13 00:39:48.048589 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.048602 | orchestrator | 2026-01-13 00:39:48.048614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:39:48.048628 | orchestrator | Tuesday 13 January 2026 00:39:43 +0000 (0:00:00.185) 0:00:02.102 ******* 2026-01-13 00:39:48.048640 | orchestrator | skipping: 
[testbed-node-3] 2026-01-13 00:39:48.048653 | orchestrator | 2026-01-13 00:39:48.048665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:39:48.048677 | orchestrator | Tuesday 13 January 2026 00:39:43 +0000 (0:00:00.185) 0:00:02.288 ******* 2026-01-13 00:39:48.048689 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.048738 | orchestrator | 2026-01-13 00:39:48.048757 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:39:48.048795 | orchestrator | Tuesday 13 January 2026 00:39:43 +0000 (0:00:00.193) 0:00:02.482 ******* 2026-01-13 00:39:48.048826 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.048837 | orchestrator | 2026-01-13 00:39:48.048848 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:39:48.048859 | orchestrator | Tuesday 13 January 2026 00:39:43 +0000 (0:00:00.183) 0:00:02.666 ******* 2026-01-13 00:39:48.048870 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315) 2026-01-13 00:39:48.048882 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315) 2026-01-13 00:39:48.048893 | orchestrator | 2026-01-13 00:39:48.048904 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:39:48.048931 | orchestrator | Tuesday 13 January 2026 00:39:44 +0000 (0:00:00.425) 0:00:03.091 ******* 2026-01-13 00:39:48.048942 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_49cd33e4-72cd-4f3f-940d-55c9f0f00a98) 2026-01-13 00:39:48.048960 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_49cd33e4-72cd-4f3f-940d-55c9f0f00a98) 2026-01-13 00:39:48.048971 | orchestrator | 2026-01-13 00:39:48.048982 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2026-01-13 00:39:48.048993 | orchestrator | Tuesday 13 January 2026 00:39:44 +0000 (0:00:00.576) 0:00:03.668 ******* 2026-01-13 00:39:48.049003 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1f00cc32-4927-4d99-9c1e-b649b1d1f573) 2026-01-13 00:39:48.049014 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1f00cc32-4927-4d99-9c1e-b649b1d1f573) 2026-01-13 00:39:48.049025 | orchestrator | 2026-01-13 00:39:48.049036 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:39:48.049056 | orchestrator | Tuesday 13 January 2026 00:39:45 +0000 (0:00:00.565) 0:00:04.234 ******* 2026-01-13 00:39:48.049066 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0a292857-8cd9-4a14-95ba-a5d022f4a90e) 2026-01-13 00:39:48.049077 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0a292857-8cd9-4a14-95ba-a5d022f4a90e) 2026-01-13 00:39:48.049088 | orchestrator | 2026-01-13 00:39:48.049099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:39:48.049110 | orchestrator | Tuesday 13 January 2026 00:39:46 +0000 (0:00:00.770) 0:00:05.005 ******* 2026-01-13 00:39:48.049120 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-13 00:39:48.049131 | orchestrator | 2026-01-13 00:39:48.049142 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:39:48.049153 | orchestrator | Tuesday 13 January 2026 00:39:46 +0000 (0:00:00.332) 0:00:05.337 ******* 2026-01-13 00:39:48.049163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-13 00:39:48.049174 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-13 00:39:48.049185 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-13 00:39:48.049195 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-13 00:39:48.049206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-13 00:39:48.049218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-13 00:39:48.049236 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-13 00:39:48.049253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-13 00:39:48.049272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-13 00:39:48.049288 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-13 00:39:48.049304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-13 00:39:48.049319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-13 00:39:48.049338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-13 00:39:48.049355 | orchestrator | 2026-01-13 00:39:48.049373 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:39:48.049392 | orchestrator | Tuesday 13 January 2026 00:39:46 +0000 (0:00:00.350) 0:00:05.688 ******* 2026-01-13 00:39:48.049411 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.049426 | orchestrator | 2026-01-13 00:39:48.049437 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:39:48.049448 | orchestrator | Tuesday 13 January 2026 00:39:46 +0000 (0:00:00.189) 
0:00:05.877 ******* 2026-01-13 00:39:48.049458 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.049469 | orchestrator | 2026-01-13 00:39:48.049479 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:39:48.049490 | orchestrator | Tuesday 13 January 2026 00:39:47 +0000 (0:00:00.188) 0:00:06.066 ******* 2026-01-13 00:39:48.049501 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.049511 | orchestrator | 2026-01-13 00:39:48.049522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:39:48.049532 | orchestrator | Tuesday 13 January 2026 00:39:47 +0000 (0:00:00.181) 0:00:06.247 ******* 2026-01-13 00:39:48.049543 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.049554 | orchestrator | 2026-01-13 00:39:48.049564 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:39:48.049575 | orchestrator | Tuesday 13 January 2026 00:39:47 +0000 (0:00:00.182) 0:00:06.429 ******* 2026-01-13 00:39:48.049593 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.049604 | orchestrator | 2026-01-13 00:39:48.049615 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:39:48.049626 | orchestrator | Tuesday 13 January 2026 00:39:47 +0000 (0:00:00.180) 0:00:06.610 ******* 2026-01-13 00:39:48.049636 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.049647 | orchestrator | 2026-01-13 00:39:48.049657 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:39:48.049668 | orchestrator | Tuesday 13 January 2026 00:39:47 +0000 (0:00:00.188) 0:00:06.798 ******* 2026-01-13 00:39:48.049679 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:48.049713 | orchestrator | 2026-01-13 00:39:48.049741 | orchestrator | TASK [Add known partitions to the 
list of available block devices] ************* 2026-01-13 00:39:55.055523 | orchestrator | Tuesday 13 January 2026 00:39:48 +0000 (0:00:00.200) 0:00:06.998 ******* 2026-01-13 00:39:55.055627 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:55.055640 | orchestrator | 2026-01-13 00:39:55.055650 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:39:55.055660 | orchestrator | Tuesday 13 January 2026 00:39:48 +0000 (0:00:00.189) 0:00:07.188 ******* 2026-01-13 00:39:55.055668 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-13 00:39:55.055717 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-13 00:39:55.055727 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-13 00:39:55.055735 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-13 00:39:55.055743 | orchestrator | 2026-01-13 00:39:55.055752 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:39:55.055760 | orchestrator | Tuesday 13 January 2026 00:39:49 +0000 (0:00:00.919) 0:00:08.107 ******* 2026-01-13 00:39:55.055768 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:55.055776 | orchestrator | 2026-01-13 00:39:55.055784 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:39:55.055792 | orchestrator | Tuesday 13 January 2026 00:39:49 +0000 (0:00:00.180) 0:00:08.288 ******* 2026-01-13 00:39:55.055800 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:55.055808 | orchestrator | 2026-01-13 00:39:55.055816 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:39:55.055824 | orchestrator | Tuesday 13 January 2026 00:39:49 +0000 (0:00:00.185) 0:00:08.473 ******* 2026-01-13 00:39:55.055831 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:55.055839 | orchestrator | 2026-01-13 00:39:55.055847 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:39:55.055855 | orchestrator | Tuesday 13 January 2026 00:39:49 +0000 (0:00:00.190) 0:00:08.663 ******* 2026-01-13 00:39:55.055863 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:55.055871 | orchestrator | 2026-01-13 00:39:55.055878 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-13 00:39:55.055886 | orchestrator | Tuesday 13 January 2026 00:39:49 +0000 (0:00:00.192) 0:00:08.855 ******* 2026-01-13 00:39:55.055894 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-01-13 00:39:55.055902 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-01-13 00:39:55.055910 | orchestrator | 2026-01-13 00:39:55.055918 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-13 00:39:55.055926 | orchestrator | Tuesday 13 January 2026 00:39:50 +0000 (0:00:00.163) 0:00:09.018 ******* 2026-01-13 00:39:55.055934 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:55.055942 | orchestrator | 2026-01-13 00:39:55.055950 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-13 00:39:55.055957 | orchestrator | Tuesday 13 January 2026 00:39:50 +0000 (0:00:00.121) 0:00:09.139 ******* 2026-01-13 00:39:55.055965 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:55.055973 | orchestrator | 2026-01-13 00:39:55.055981 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-13 00:39:55.056011 | orchestrator | Tuesday 13 January 2026 00:39:50 +0000 (0:00:00.122) 0:00:09.262 ******* 2026-01-13 00:39:55.056020 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:39:55.056029 | orchestrator | 2026-01-13 00:39:55.056039 | orchestrator | TASK [Define lvm_volumes structures] 
*******************************************
2026-01-13 00:39:55.056050 | orchestrator | Tuesday 13 January 2026 00:39:50 +0000 (0:00:00.112) 0:00:09.375 *******
2026-01-13 00:39:55.056064 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:39:55.056077 | orchestrator |
2026-01-13 00:39:55.056091 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-13 00:39:55.056104 | orchestrator | Tuesday 13 January 2026 00:39:50 +0000 (0:00:00.125) 0:00:09.500 *******
2026-01-13 00:39:55.056119 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b9be54a9-cd9c-568c-9220-61b18da052d9'}})
2026-01-13 00:39:55.056134 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '03961d85-1922-5669-8251-0ccc6cca9fac'}})
2026-01-13 00:39:55.056147 | orchestrator |
2026-01-13 00:39:55.056161 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-13 00:39:55.056175 | orchestrator | Tuesday 13 January 2026 00:39:50 +0000 (0:00:00.153) 0:00:09.654 *******
2026-01-13 00:39:55.056187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b9be54a9-cd9c-568c-9220-61b18da052d9'}})
2026-01-13 00:39:55.056202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '03961d85-1922-5669-8251-0ccc6cca9fac'}})
2026-01-13 00:39:55.056210 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:39:55.056218 | orchestrator |
2026-01-13 00:39:55.056226 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-13 00:39:55.056234 | orchestrator | Tuesday 13 January 2026 00:39:50 +0000 (0:00:00.140) 0:00:09.795 *******
2026-01-13 00:39:55.056246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b9be54a9-cd9c-568c-9220-61b18da052d9'}})
2026-01-13 00:39:55.056259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '03961d85-1922-5669-8251-0ccc6cca9fac'}})
2026-01-13 00:39:55.056270 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:39:55.056281 | orchestrator |
2026-01-13 00:39:55.056292 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-13 00:39:55.056304 | orchestrator | Tuesday 13 January 2026 00:39:51 +0000 (0:00:00.308) 0:00:10.103 *******
2026-01-13 00:39:55.056316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b9be54a9-cd9c-568c-9220-61b18da052d9'}})
2026-01-13 00:39:55.056348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '03961d85-1922-5669-8251-0ccc6cca9fac'}})
2026-01-13 00:39:55.056363 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:39:55.056378 | orchestrator |
2026-01-13 00:39:55.056392 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-13 00:39:55.056407 | orchestrator | Tuesday 13 January 2026 00:39:51 +0000 (0:00:00.136) 0:00:10.240 *******
2026-01-13 00:39:55.056421 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:39:55.056436 | orchestrator |
2026-01-13 00:39:55.056450 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-13 00:39:55.056465 | orchestrator | Tuesday 13 January 2026 00:39:51 +0000 (0:00:00.129) 0:00:10.370 *******
2026-01-13 00:39:55.056479 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:39:55.056494 | orchestrator |
2026-01-13 00:39:55.056508 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-13 00:39:55.056522 | orchestrator | Tuesday 13 January 2026 00:39:51 +0000 (0:00:00.145) 0:00:10.515 *******
2026-01-13 00:39:55.056537 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:39:55.056551 | orchestrator |
2026-01-13 00:39:55.056565 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-13 00:39:55.056580 | orchestrator | Tuesday 13 January 2026 00:39:51 +0000 (0:00:00.133) 0:00:10.648 *******
2026-01-13 00:39:55.056606 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:39:55.056621 | orchestrator |
2026-01-13 00:39:55.056635 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-13 00:39:55.056649 | orchestrator | Tuesday 13 January 2026 00:39:51 +0000 (0:00:00.115) 0:00:10.764 *******
2026-01-13 00:39:55.056663 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:39:55.056678 | orchestrator |
2026-01-13 00:39:55.056716 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-13 00:39:55.056731 | orchestrator | Tuesday 13 January 2026 00:39:51 +0000 (0:00:00.136) 0:00:10.900 *******
2026-01-13 00:39:55.056745 | orchestrator | ok: [testbed-node-3] => {
2026-01-13 00:39:55.056760 | orchestrator |     "ceph_osd_devices": {
2026-01-13 00:39:55.056774 | orchestrator |         "sdb": {
2026-01-13 00:39:55.056789 | orchestrator |             "osd_lvm_uuid": "b9be54a9-cd9c-568c-9220-61b18da052d9"
2026-01-13 00:39:55.056804 | orchestrator |         },
2026-01-13 00:39:55.056818 | orchestrator |         "sdc": {
2026-01-13 00:39:55.056833 | orchestrator |             "osd_lvm_uuid": "03961d85-1922-5669-8251-0ccc6cca9fac"
2026-01-13 00:39:55.056847 | orchestrator |         }
2026-01-13 00:39:55.056861 | orchestrator |     }
2026-01-13 00:39:55.056876 | orchestrator | }
2026-01-13 00:39:55.056891 | orchestrator |
2026-01-13 00:39:55.056905 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-13 00:39:55.056926 | orchestrator | Tuesday 13 January 2026 00:39:52 +0000 (0:00:00.141) 0:00:11.041 *******
2026-01-13 00:39:55.056942 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:39:55.056956 | orchestrator |
2026-01-13 00:39:55.056971 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-13 00:39:55.056985 | orchestrator | Tuesday 13 January 2026 00:39:52 +0000 (0:00:00.134) 0:00:11.176 *******
2026-01-13 00:39:55.057000 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:39:55.057014 | orchestrator |
2026-01-13 00:39:55.057028 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-13 00:39:55.057043 | orchestrator | Tuesday 13 January 2026 00:39:52 +0000 (0:00:00.131) 0:00:11.307 *******
2026-01-13 00:39:55.057057 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:39:55.057071 | orchestrator |
2026-01-13 00:39:55.057086 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-13 00:39:55.057098 | orchestrator | Tuesday 13 January 2026 00:39:52 +0000 (0:00:00.124) 0:00:11.431 *******
2026-01-13 00:39:55.057111 | orchestrator | changed: [testbed-node-3] => {
2026-01-13 00:39:55.057124 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-13 00:39:55.057136 | orchestrator |         "ceph_osd_devices": {
2026-01-13 00:39:55.057148 | orchestrator |             "sdb": {
2026-01-13 00:39:55.057161 | orchestrator |                 "osd_lvm_uuid": "b9be54a9-cd9c-568c-9220-61b18da052d9"
2026-01-13 00:39:55.057176 | orchestrator |             },
2026-01-13 00:39:55.057190 | orchestrator |             "sdc": {
2026-01-13 00:39:55.057204 | orchestrator |                 "osd_lvm_uuid": "03961d85-1922-5669-8251-0ccc6cca9fac"
2026-01-13 00:39:55.057217 | orchestrator |             }
2026-01-13 00:39:55.057231 | orchestrator |         },
2026-01-13 00:39:55.057241 | orchestrator |         "lvm_volumes": [
2026-01-13 00:39:55.057249 | orchestrator |             {
2026-01-13 00:39:55.057257 | orchestrator |                 "data": "osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9",
2026-01-13 00:39:55.057264 | orchestrator |                 "data_vg": "ceph-b9be54a9-cd9c-568c-9220-61b18da052d9"
2026-01-13 00:39:55.057272 | orchestrator |             },
2026-01-13 00:39:55.057280 | orchestrator |             {
2026-01-13 00:39:55.057288 | orchestrator |                 "data": "osd-block-03961d85-1922-5669-8251-0ccc6cca9fac",
2026-01-13 00:39:55.057295 | orchestrator |                 "data_vg": "ceph-03961d85-1922-5669-8251-0ccc6cca9fac"
2026-01-13 00:39:55.057303 | orchestrator |             }
2026-01-13 00:39:55.057311 | orchestrator |         ]
2026-01-13 00:39:55.057319 | orchestrator |     }
2026-01-13 00:39:55.057333 | orchestrator | }
2026-01-13 00:39:55.057341 | orchestrator |
2026-01-13 00:39:55.057349 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-13 00:39:55.057357 | orchestrator | Tuesday 13 January 2026 00:39:52 +0000 (0:00:00.375) 0:00:11.807 *******
2026-01-13 00:39:55.057365 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-13 00:39:55.057372 | orchestrator |
2026-01-13 00:39:55.057380 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-13 00:39:55.057388 | orchestrator |
2026-01-13 00:39:55.057396 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-13 00:39:55.057404 | orchestrator | Tuesday 13 January 2026 00:39:54 +0000 (0:00:01.731) 0:00:13.538 *******
2026-01-13 00:39:55.057411 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-13 00:39:55.057419 | orchestrator |
2026-01-13 00:39:55.057427 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-13 00:39:55.057435 | orchestrator | Tuesday 13 January 2026 00:39:54 +0000 (0:00:00.250) 0:00:13.789 *******
2026-01-13 00:39:55.057442 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:39:55.057450 | orchestrator |
2026-01-13 00:39:55.057465 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356238 | orchestrator | Tuesday 13 January 2026 00:39:55 +0000 (0:00:00.222)
0:00:14.011 *******
2026-01-13 00:40:02.356341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-01-13 00:40:02.356355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-01-13 00:40:02.356363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-01-13 00:40:02.356371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-01-13 00:40:02.356379 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-01-13 00:40:02.356388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-01-13 00:40:02.356396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-01-13 00:40:02.356416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-01-13 00:40:02.356421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-01-13 00:40:02.356426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-01-13 00:40:02.356431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-01-13 00:40:02.356439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-01-13 00:40:02.356444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-13 00:40:02.356449 | orchestrator |
2026-01-13 00:40:02.356454 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356459 | orchestrator | Tuesday 13 January 2026 00:39:55 +0000 (0:00:00.345) 0:00:14.357 *******
2026-01-13 00:40:02.356464 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.356469 | orchestrator |
2026-01-13 00:40:02.356474 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356478 | orchestrator | Tuesday 13 January 2026 00:39:55 +0000 (0:00:00.172) 0:00:14.529 *******
2026-01-13 00:40:02.356483 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.356487 | orchestrator |
2026-01-13 00:40:02.356492 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356496 | orchestrator | Tuesday 13 January 2026 00:39:55 +0000 (0:00:00.182) 0:00:14.712 *******
2026-01-13 00:40:02.356501 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.356505 | orchestrator |
2026-01-13 00:40:02.356510 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356529 | orchestrator | Tuesday 13 January 2026 00:39:55 +0000 (0:00:00.186) 0:00:14.898 *******
2026-01-13 00:40:02.356534 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.356542 | orchestrator |
2026-01-13 00:40:02.356549 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356556 | orchestrator | Tuesday 13 January 2026 00:39:56 +0000 (0:00:00.184) 0:00:15.083 *******
2026-01-13 00:40:02.356563 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.356570 | orchestrator |
2026-01-13 00:40:02.356577 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356585 | orchestrator | Tuesday 13 January 2026 00:39:56 +0000 (0:00:00.508) 0:00:15.591 *******
2026-01-13 00:40:02.356591 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.356599 | orchestrator |
2026-01-13 00:40:02.356606 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356614 | orchestrator | Tuesday 13 January 2026 00:39:56 +0000 (0:00:00.180) 0:00:15.772 *******
2026-01-13 00:40:02.356621 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.356629 | orchestrator |
2026-01-13 00:40:02.356637 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356645 | orchestrator | Tuesday 13 January 2026 00:39:56 +0000 (0:00:00.184) 0:00:15.956 *******
2026-01-13 00:40:02.356653 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.356660 | orchestrator |
2026-01-13 00:40:02.356668 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356677 | orchestrator | Tuesday 13 January 2026 00:39:57 +0000 (0:00:00.188) 0:00:16.144 *******
2026-01-13 00:40:02.356726 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d)
2026-01-13 00:40:02.356735 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d)
2026-01-13 00:40:02.356742 | orchestrator |
2026-01-13 00:40:02.356749 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356757 | orchestrator | Tuesday 13 January 2026 00:39:57 +0000 (0:00:00.392) 0:00:16.537 *******
2026-01-13 00:40:02.356764 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6ad71b9e-76db-4ac5-b372-050f59253056)
2026-01-13 00:40:02.356772 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6ad71b9e-76db-4ac5-b372-050f59253056)
2026-01-13 00:40:02.356780 | orchestrator |
2026-01-13 00:40:02.356787 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356796 | orchestrator | Tuesday 13 January 2026 00:39:57 +0000 (0:00:00.390) 0:00:16.928 *******
2026-01-13 00:40:02.356802 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9db8234e-f6a8-4211-a809-87a509109e78)
2026-01-13 00:40:02.356807 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9db8234e-f6a8-4211-a809-87a509109e78)
2026-01-13 00:40:02.356813 | orchestrator |
2026-01-13 00:40:02.356819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356923 | orchestrator | Tuesday 13 January 2026 00:39:58 +0000 (0:00:00.404) 0:00:17.333 *******
2026-01-13 00:40:02.356937 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5c0bff01-3898-4d25-903e-2ecdf087243c)
2026-01-13 00:40:02.356945 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5c0bff01-3898-4d25-903e-2ecdf087243c)
2026-01-13 00:40:02.356953 | orchestrator |
2026-01-13 00:40:02.356967 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:02.356973 | orchestrator | Tuesday 13 January 2026 00:39:58 +0000 (0:00:00.395) 0:00:17.729 *******
2026-01-13 00:40:02.356979 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-13 00:40:02.356984 | orchestrator |
2026-01-13 00:40:02.356990 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:02.356995 | orchestrator | Tuesday 13 January 2026 00:39:59 +0000 (0:00:00.322) 0:00:18.051 *******
2026-01-13 00:40:02.357008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-13 00:40:02.357014 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-13 00:40:02.357020 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-13 00:40:02.357025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-13 00:40:02.357030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-13 00:40:02.357035 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-13 00:40:02.357040 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-13 00:40:02.357046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-13 00:40:02.357051 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-13 00:40:02.357056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-13 00:40:02.357061 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-13 00:40:02.357067 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-13 00:40:02.357072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-13 00:40:02.357078 | orchestrator |
2026-01-13 00:40:02.357083 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:02.357088 | orchestrator | Tuesday 13 January 2026 00:39:59 +0000 (0:00:00.354) 0:00:18.406 *******
2026-01-13 00:40:02.357094 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.357098 | orchestrator |
2026-01-13 00:40:02.357103 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:02.357107 | orchestrator | Tuesday 13 January 2026 00:40:00 +0000 (0:00:00.669) 0:00:19.076 *******
2026-01-13 00:40:02.357112 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.357116 | orchestrator |
2026-01-13 00:40:02.357121 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:02.357125 | orchestrator | Tuesday 13 January 2026 00:40:00 +0000 (0:00:00.191) 0:00:19.268 *******
2026-01-13 00:40:02.357129 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.357134 | orchestrator |
2026-01-13 00:40:02.357139 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:02.357143 | orchestrator | Tuesday 13 January 2026 00:40:00 +0000 (0:00:00.218) 0:00:19.486 *******
2026-01-13 00:40:02.357148 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.357152 | orchestrator |
2026-01-13 00:40:02.357157 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:02.357161 | orchestrator | Tuesday 13 January 2026 00:40:00 +0000 (0:00:00.201) 0:00:19.687 *******
2026-01-13 00:40:02.357166 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.357170 | orchestrator |
2026-01-13 00:40:02.357175 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:02.357179 | orchestrator | Tuesday 13 January 2026 00:40:00 +0000 (0:00:00.195) 0:00:19.883 *******
2026-01-13 00:40:02.357184 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.357188 | orchestrator |
2026-01-13 00:40:02.357193 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:02.357197 | orchestrator | Tuesday 13 January 2026 00:40:01 +0000 (0:00:00.208) 0:00:20.092 *******
2026-01-13 00:40:02.357201 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.357206 | orchestrator |
2026-01-13 00:40:02.357210 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:02.357215 | orchestrator | Tuesday 13 January 2026 00:40:01 +0000 (0:00:00.190) 0:00:20.282 *******
2026-01-13 00:40:02.357222 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:02.357227 | orchestrator |
2026-01-13 00:40:02.357231 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:02.357236 | orchestrator | Tuesday 13 January 2026 00:40:01 +0000 (0:00:00.187) 0:00:20.470 *******
2026-01-13 00:40:02.357240 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-13 00:40:02.357246 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-13 00:40:02.357250 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-13 00:40:02.357255 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-13 00:40:02.357259 | orchestrator |
2026-01-13 00:40:02.357264 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:02.357268 | orchestrator | Tuesday 13 January 2026 00:40:02 +0000 (0:00:00.686) 0:00:21.157 *******
2026-01-13 00:40:02.357273 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.516393 | orchestrator |
2026-01-13 00:40:07.516543 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:07.516567 | orchestrator | Tuesday 13 January 2026 00:40:02 +0000 (0:00:00.156) 0:00:21.314 *******
2026-01-13 00:40:07.516582 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.516596 | orchestrator |
2026-01-13 00:40:07.516611 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:07.516651 | orchestrator | Tuesday 13 January 2026 00:40:02 +0000 (0:00:00.166) 0:00:21.480 *******
2026-01-13 00:40:07.516667 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.516709 | orchestrator |
2026-01-13 00:40:07.516723 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:40:07.516737 | orchestrator | Tuesday 13 January 2026 00:40:02 +0000 (0:00:00.177) 0:00:21.657 *******
2026-01-13 00:40:07.516749 |
orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.516762 | orchestrator |
2026-01-13 00:40:07.516776 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-13 00:40:07.516788 | orchestrator | Tuesday 13 January 2026 00:40:03 +0000 (0:00:00.481) 0:00:22.139 *******
2026-01-13 00:40:07.516802 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-01-13 00:40:07.516814 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-01-13 00:40:07.516828 | orchestrator |
2026-01-13 00:40:07.516840 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-13 00:40:07.516853 | orchestrator | Tuesday 13 January 2026 00:40:03 +0000 (0:00:00.110) 0:00:22.249 *******
2026-01-13 00:40:07.516866 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.516879 | orchestrator |
2026-01-13 00:40:07.516894 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-13 00:40:07.516909 | orchestrator | Tuesday 13 January 2026 00:40:03 +0000 (0:00:00.116) 0:00:22.365 *******
2026-01-13 00:40:07.516923 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.516937 | orchestrator |
2026-01-13 00:40:07.516951 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-13 00:40:07.516963 | orchestrator | Tuesday 13 January 2026 00:40:03 +0000 (0:00:00.103) 0:00:22.469 *******
2026-01-13 00:40:07.516977 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.516989 | orchestrator |
2026-01-13 00:40:07.517002 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-13 00:40:07.517016 | orchestrator | Tuesday 13 January 2026 00:40:03 +0000 (0:00:00.105) 0:00:22.575 *******
2026-01-13 00:40:07.517030 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:40:07.517044 | orchestrator |
2026-01-13 00:40:07.517058 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-13 00:40:07.517071 | orchestrator | Tuesday 13 January 2026 00:40:03 +0000 (0:00:00.106) 0:00:22.681 *******
2026-01-13 00:40:07.517086 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'}})
2026-01-13 00:40:07.517099 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2b3e8737-91e3-53c0-9b3a-5288a4111b63'}})
2026-01-13 00:40:07.517142 | orchestrator |
2026-01-13 00:40:07.517155 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-13 00:40:07.517169 | orchestrator | Tuesday 13 January 2026 00:40:03 +0000 (0:00:00.133) 0:00:22.815 *******
2026-01-13 00:40:07.517183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'}})
2026-01-13 00:40:07.517199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2b3e8737-91e3-53c0-9b3a-5288a4111b63'}})
2026-01-13 00:40:07.517212 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.517226 | orchestrator |
2026-01-13 00:40:07.517239 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-13 00:40:07.517252 | orchestrator | Tuesday 13 January 2026 00:40:03 +0000 (0:00:00.116) 0:00:22.932 *******
2026-01-13 00:40:07.517266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'}})
2026-01-13 00:40:07.517279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2b3e8737-91e3-53c0-9b3a-5288a4111b63'}})
2026-01-13 00:40:07.517293 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.517305 | orchestrator |
2026-01-13 00:40:07.517318 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-13 00:40:07.517331 | orchestrator | Tuesday 13 January 2026 00:40:04 +0000 (0:00:00.121) 0:00:23.054 *******
2026-01-13 00:40:07.517345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'}})
2026-01-13 00:40:07.517360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2b3e8737-91e3-53c0-9b3a-5288a4111b63'}})
2026-01-13 00:40:07.517372 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.517385 | orchestrator |
2026-01-13 00:40:07.517400 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-13 00:40:07.517413 | orchestrator | Tuesday 13 January 2026 00:40:04 +0000 (0:00:00.114) 0:00:23.168 *******
2026-01-13 00:40:07.517426 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:40:07.517441 | orchestrator |
2026-01-13 00:40:07.517454 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-13 00:40:07.517467 | orchestrator | Tuesday 13 January 2026 00:40:04 +0000 (0:00:00.105) 0:00:23.274 *******
2026-01-13 00:40:07.517481 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:40:07.517493 | orchestrator |
2026-01-13 00:40:07.517507 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-13 00:40:07.517520 | orchestrator | Tuesday 13 January 2026 00:40:04 +0000 (0:00:00.102) 0:00:23.376 *******
2026-01-13 00:40:07.517563 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.517579 | orchestrator |
2026-01-13 00:40:07.517591 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-13 00:40:07.517605 | orchestrator | Tuesday 13 January 2026 00:40:04 +0000 (0:00:00.237) 0:00:23.614 *******
2026-01-13 00:40:07.517618 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.517633 | orchestrator |
2026-01-13 00:40:07.517646 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-13 00:40:07.517659 | orchestrator | Tuesday 13 January 2026 00:40:04 +0000 (0:00:00.136) 0:00:23.750 *******
2026-01-13 00:40:07.517673 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.517715 | orchestrator |
2026-01-13 00:40:07.517728 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-13 00:40:07.517742 | orchestrator | Tuesday 13 January 2026 00:40:04 +0000 (0:00:00.099) 0:00:23.849 *******
2026-01-13 00:40:07.517754 | orchestrator | ok: [testbed-node-4] => {
2026-01-13 00:40:07.517765 | orchestrator |     "ceph_osd_devices": {
2026-01-13 00:40:07.517778 | orchestrator |         "sdb": {
2026-01-13 00:40:07.517791 | orchestrator |             "osd_lvm_uuid": "11aa5137-b5aa-5373-b4c1-0bd5a429c1a5"
2026-01-13 00:40:07.517820 | orchestrator |         },
2026-01-13 00:40:07.517834 | orchestrator |         "sdc": {
2026-01-13 00:40:07.517860 | orchestrator |             "osd_lvm_uuid": "2b3e8737-91e3-53c0-9b3a-5288a4111b63"
2026-01-13 00:40:07.517873 | orchestrator |         }
2026-01-13 00:40:07.517885 | orchestrator |     }
2026-01-13 00:40:07.517900 | orchestrator | }
2026-01-13 00:40:07.517913 | orchestrator |
2026-01-13 00:40:07.517926 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-13 00:40:07.517939 | orchestrator | Tuesday 13 January 2026 00:40:05 +0000 (0:00:00.142) 0:00:23.993 *******
2026-01-13 00:40:07.517952 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.517965 | orchestrator |
2026-01-13 00:40:07.517981 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-13 00:40:07.517995 | orchestrator | Tuesday 13 January 2026 00:40:05 +0000 (0:00:00.099) 0:00:24.092 *******
2026-01-13 00:40:07.518008 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.518096 | orchestrator |
2026-01-13 00:40:07.518112 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-13 00:40:07.518127 | orchestrator | Tuesday 13 January 2026 00:40:05 +0000 (0:00:00.109) 0:00:24.202 *******
2026-01-13 00:40:07.518142 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:40:07.518156 | orchestrator |
2026-01-13 00:40:07.518171 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-13 00:40:07.518184 | orchestrator | Tuesday 13 January 2026 00:40:05 +0000 (0:00:00.102) 0:00:24.304 *******
2026-01-13 00:40:07.518199 | orchestrator | changed: [testbed-node-4] => {
2026-01-13 00:40:07.518214 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-13 00:40:07.518229 | orchestrator |         "ceph_osd_devices": {
2026-01-13 00:40:07.518254 | orchestrator |             "sdb": {
2026-01-13 00:40:07.518265 | orchestrator |                 "osd_lvm_uuid": "11aa5137-b5aa-5373-b4c1-0bd5a429c1a5"
2026-01-13 00:40:07.518276 | orchestrator |             },
2026-01-13 00:40:07.518288 | orchestrator |             "sdc": {
2026-01-13 00:40:07.518300 | orchestrator |                 "osd_lvm_uuid": "2b3e8737-91e3-53c0-9b3a-5288a4111b63"
2026-01-13 00:40:07.518312 | orchestrator |             }
2026-01-13 00:40:07.518324 | orchestrator |         },
2026-01-13 00:40:07.518337 | orchestrator |         "lvm_volumes": [
2026-01-13 00:40:07.518350 | orchestrator |             {
2026-01-13 00:40:07.518363 | orchestrator |                 "data": "osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5",
2026-01-13 00:40:07.518376 | orchestrator |                 "data_vg": "ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5"
2026-01-13 00:40:07.518389 | orchestrator |             },
2026-01-13 00:40:07.518401 | orchestrator |             {
2026-01-13 00:40:07.518414 | orchestrator |                 "data": "osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63",
2026-01-13 00:40:07.518426 | orchestrator |                 "data_vg": "ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63"
2026-01-13 00:40:07.518438 | orchestrator |             }
2026-01-13 00:40:07.518451 | orchestrator |         ]
2026-01-13 00:40:07.518463 | orchestrator |     }
2026-01-13 00:40:07.518476 | orchestrator | }
2026-01-13 00:40:07.518488 | orchestrator |
2026-01-13 00:40:07.518500 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-13 00:40:07.518513 | orchestrator | Tuesday 13 January 2026 00:40:05 +0000 (0:00:00.180) 0:00:24.484 *******
2026-01-13 00:40:07.518526 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-13 00:40:07.518536 | orchestrator |
2026-01-13 00:40:07.518548 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-13 00:40:07.518559 | orchestrator |
2026-01-13 00:40:07.518570 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-13 00:40:07.518582 | orchestrator | Tuesday 13 January 2026 00:40:06 +0000 (0:00:00.995) 0:00:25.480 *******
2026-01-13 00:40:07.518593 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-13 00:40:07.518604 | orchestrator |
2026-01-13 00:40:07.518616 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-13 00:40:07.518639 | orchestrator | Tuesday 13 January 2026 00:40:07 +0000 (0:00:00.482) 0:00:25.963 *******
2026-01-13 00:40:07.518651 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:40:07.518662 | orchestrator |
2026-01-13 00:40:07.518674 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:07.518707 | orchestrator | Tuesday 13 January 2026 00:40:07 +0000 (0:00:00.194) 0:00:26.157 *******
2026-01-13 00:40:07.518719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-13 00:40:07.518731 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-13 00:40:07.518742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-13 00:40:07.518756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-13 00:40:07.518768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-13 00:40:07.518796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-13 00:40:14.298310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-13 00:40:14.298423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-13 00:40:14.298438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-13 00:40:14.298449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-13 00:40:14.298459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-13 00:40:14.298468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-13 00:40:14.298478 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-13 00:40:14.298488 | orchestrator |
2026-01-13 00:40:14.298499 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:14.298510 | orchestrator | Tuesday 13 January 2026 00:40:07 +0000 (0:00:00.307) 0:00:26.464 *******
2026-01-13 00:40:14.298520 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:40:14.298530 | orchestrator |
2026-01-13 00:40:14.298540 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:14.298549 | orchestrator | Tuesday 13 January 2026 00:40:07 +0000 (0:00:00.175) 0:00:26.640 *******
2026-01-13 00:40:14.298559 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:40:14.298568 | orchestrator |
2026-01-13 00:40:14.298578 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:14.298587 | orchestrator | Tuesday 13 January 2026 00:40:07 +0000 (0:00:00.176) 0:00:26.817 *******
2026-01-13 00:40:14.298597 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:40:14.298606 | orchestrator |
2026-01-13 00:40:14.298616 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:14.298625 | orchestrator | Tuesday 13 January 2026 00:40:08 +0000 (0:00:00.180) 0:00:26.997 *******
2026-01-13 00:40:14.298635 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:40:14.298644 | orchestrator |
2026-01-13 00:40:14.298654 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:14.298663 | orchestrator | Tuesday 13 January 2026 00:40:08 +0000 (0:00:00.170) 0:00:27.168 *******
2026-01-13 00:40:14.298673 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:40:14.298716 | orchestrator |
2026-01-13 00:40:14.298727 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:14.298736 | orchestrator | Tuesday 13 January 2026 00:40:08 +0000 (0:00:00.187) 0:00:27.355 *******
2026-01-13 00:40:14.298746 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:40:14.298756 | orchestrator |
2026-01-13 00:40:14.298784 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:40:14.298814 | orchestrator | Tuesday 13 January 2026 00:40:08 +0000 (0:00:00.166) 0:00:27.522 *******
2026-01-13 00:40:14.298824 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:40:14.298834 | orchestrator |
2026-01-13 00:40:14.298845 | orchestrator | TASK [Add known links
to the list of available block devices] ****************** 2026-01-13 00:40:14.298856 | orchestrator | Tuesday 13 January 2026 00:40:08 +0000 (0:00:00.153) 0:00:27.676 ******* 2026-01-13 00:40:14.298867 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:14.298877 | orchestrator | 2026-01-13 00:40:14.298888 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:40:14.298899 | orchestrator | Tuesday 13 January 2026 00:40:08 +0000 (0:00:00.169) 0:00:27.846 ******* 2026-01-13 00:40:14.298910 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569) 2026-01-13 00:40:14.298922 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569) 2026-01-13 00:40:14.298933 | orchestrator | 2026-01-13 00:40:14.298944 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:40:14.298955 | orchestrator | Tuesday 13 January 2026 00:40:09 +0000 (0:00:00.537) 0:00:28.383 ******* 2026-01-13 00:40:14.298966 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_79922d84-0445-4535-976b-32e74e35a748) 2026-01-13 00:40:14.298976 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_79922d84-0445-4535-976b-32e74e35a748) 2026-01-13 00:40:14.298986 | orchestrator | 2026-01-13 00:40:14.298997 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:40:14.299008 | orchestrator | Tuesday 13 January 2026 00:40:09 +0000 (0:00:00.365) 0:00:28.749 ******* 2026-01-13 00:40:14.299018 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f69e02e7-d854-4ded-bb8d-51d0e0400336) 2026-01-13 00:40:14.299030 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f69e02e7-d854-4ded-bb8d-51d0e0400336) 2026-01-13 00:40:14.299041 | orchestrator | 2026-01-13 00:40:14.299052 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:40:14.299063 | orchestrator | Tuesday 13 January 2026 00:40:10 +0000 (0:00:00.400) 0:00:29.149 ******* 2026-01-13 00:40:14.299072 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5295d09e-fddd-4452-8a25-9ba23e2b95ae) 2026-01-13 00:40:14.299081 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5295d09e-fddd-4452-8a25-9ba23e2b95ae) 2026-01-13 00:40:14.299091 | orchestrator | 2026-01-13 00:40:14.299100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:40:14.299109 | orchestrator | Tuesday 13 January 2026 00:40:10 +0000 (0:00:00.469) 0:00:29.618 ******* 2026-01-13 00:40:14.299119 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-13 00:40:14.299128 | orchestrator | 2026-01-13 00:40:14.299138 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299165 | orchestrator | Tuesday 13 January 2026 00:40:10 +0000 (0:00:00.318) 0:00:29.937 ******* 2026-01-13 00:40:14.299176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-13 00:40:14.299185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-13 00:40:14.299195 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-13 00:40:14.299204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-13 00:40:14.299214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-13 00:40:14.299223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-13 00:40:14.299233 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-13 00:40:14.299243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-13 00:40:14.299260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-13 00:40:14.299270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-13 00:40:14.299279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-13 00:40:14.299289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-13 00:40:14.299298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-13 00:40:14.299308 | orchestrator | 2026-01-13 00:40:14.299317 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299327 | orchestrator | Tuesday 13 January 2026 00:40:11 +0000 (0:00:00.383) 0:00:30.320 ******* 2026-01-13 00:40:14.299337 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:14.299346 | orchestrator | 2026-01-13 00:40:14.299356 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299365 | orchestrator | Tuesday 13 January 2026 00:40:11 +0000 (0:00:00.181) 0:00:30.502 ******* 2026-01-13 00:40:14.299374 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:14.299384 | orchestrator | 2026-01-13 00:40:14.299393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299403 | orchestrator | Tuesday 13 January 2026 00:40:11 +0000 (0:00:00.203) 0:00:30.706 ******* 2026-01-13 00:40:14.299412 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:14.299422 | orchestrator | 2026-01-13 00:40:14.299432 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299441 | orchestrator | Tuesday 13 January 2026 00:40:11 +0000 (0:00:00.180) 0:00:30.886 ******* 2026-01-13 00:40:14.299450 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:14.299460 | orchestrator | 2026-01-13 00:40:14.299469 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299479 | orchestrator | Tuesday 13 January 2026 00:40:12 +0000 (0:00:00.181) 0:00:31.068 ******* 2026-01-13 00:40:14.299488 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:14.299498 | orchestrator | 2026-01-13 00:40:14.299507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299517 | orchestrator | Tuesday 13 January 2026 00:40:12 +0000 (0:00:00.160) 0:00:31.228 ******* 2026-01-13 00:40:14.299526 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:14.299536 | orchestrator | 2026-01-13 00:40:14.299545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299555 | orchestrator | Tuesday 13 January 2026 00:40:12 +0000 (0:00:00.393) 0:00:31.621 ******* 2026-01-13 00:40:14.299564 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:14.299574 | orchestrator | 2026-01-13 00:40:14.299583 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299593 | orchestrator | Tuesday 13 January 2026 00:40:12 +0000 (0:00:00.172) 0:00:31.794 ******* 2026-01-13 00:40:14.299602 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:14.299612 | orchestrator | 2026-01-13 00:40:14.299621 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299631 | orchestrator | Tuesday 13 January 2026 00:40:12 +0000 (0:00:00.165) 0:00:31.959 ******* 
2026-01-13 00:40:14.299640 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-13 00:40:14.299650 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-01-13 00:40:14.299660 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-13 00:40:14.299670 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-13 00:40:14.299679 | orchestrator | 2026-01-13 00:40:14.299731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299740 | orchestrator | Tuesday 13 January 2026 00:40:13 +0000 (0:00:00.554) 0:00:32.514 ******* 2026-01-13 00:40:14.299750 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:14.299766 | orchestrator | 2026-01-13 00:40:14.299776 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299791 | orchestrator | Tuesday 13 January 2026 00:40:13 +0000 (0:00:00.166) 0:00:32.681 ******* 2026-01-13 00:40:14.299801 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:14.299811 | orchestrator | 2026-01-13 00:40:14.299820 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299830 | orchestrator | Tuesday 13 January 2026 00:40:13 +0000 (0:00:00.167) 0:00:32.848 ******* 2026-01-13 00:40:14.299839 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:14.299849 | orchestrator | 2026-01-13 00:40:14.299858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:40:14.299868 | orchestrator | Tuesday 13 January 2026 00:40:14 +0000 (0:00:00.173) 0:00:33.021 ******* 2026-01-13 00:40:14.299877 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:14.299886 | orchestrator | 2026-01-13 00:40:14.299902 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-13 00:40:17.798763 | orchestrator | Tuesday 13 January 2026 00:40:14 
+0000 (0:00:00.230) 0:00:33.252 ******* 2026-01-13 00:40:17.798874 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-01-13 00:40:17.798892 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-01-13 00:40:17.798903 | orchestrator | 2026-01-13 00:40:17.798913 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-13 00:40:17.798925 | orchestrator | Tuesday 13 January 2026 00:40:14 +0000 (0:00:00.191) 0:00:33.443 ******* 2026-01-13 00:40:17.798937 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:17.798949 | orchestrator | 2026-01-13 00:40:17.798959 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-13 00:40:17.798971 | orchestrator | Tuesday 13 January 2026 00:40:14 +0000 (0:00:00.139) 0:00:33.582 ******* 2026-01-13 00:40:17.798981 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:17.798993 | orchestrator | 2026-01-13 00:40:17.799004 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-13 00:40:17.799012 | orchestrator | Tuesday 13 January 2026 00:40:14 +0000 (0:00:00.131) 0:00:33.714 ******* 2026-01-13 00:40:17.799024 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:17.799035 | orchestrator | 2026-01-13 00:40:17.799046 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-13 00:40:17.799057 | orchestrator | Tuesday 13 January 2026 00:40:14 +0000 (0:00:00.236) 0:00:33.950 ******* 2026-01-13 00:40:17.799066 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:40:17.799077 | orchestrator | 2026-01-13 00:40:17.799090 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-13 00:40:17.799102 | orchestrator | Tuesday 13 January 2026 00:40:15 +0000 (0:00:00.109) 0:00:34.059 ******* 2026-01-13 00:40:17.799113 | orchestrator 
| ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e91d200a-cf56-55df-b2f8-08f15361112f'}}) 2026-01-13 00:40:17.799122 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7ebda4f6-7b50-59b0-8273-b291dd7d1677'}}) 2026-01-13 00:40:17.799131 | orchestrator | 2026-01-13 00:40:17.799142 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-13 00:40:17.799154 | orchestrator | Tuesday 13 January 2026 00:40:15 +0000 (0:00:00.131) 0:00:34.190 ******* 2026-01-13 00:40:17.799166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e91d200a-cf56-55df-b2f8-08f15361112f'}})  2026-01-13 00:40:17.799199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7ebda4f6-7b50-59b0-8273-b291dd7d1677'}})  2026-01-13 00:40:17.799211 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:17.799220 | orchestrator | 2026-01-13 00:40:17.799228 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-13 00:40:17.799240 | orchestrator | Tuesday 13 January 2026 00:40:15 +0000 (0:00:00.121) 0:00:34.312 ******* 2026-01-13 00:40:17.799274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e91d200a-cf56-55df-b2f8-08f15361112f'}})  2026-01-13 00:40:17.799284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7ebda4f6-7b50-59b0-8273-b291dd7d1677'}})  2026-01-13 00:40:17.799297 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:17.799308 | orchestrator | 2026-01-13 00:40:17.799324 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-13 00:40:17.799333 | orchestrator | Tuesday 13 January 2026 00:40:15 +0000 (0:00:00.126) 0:00:34.438 ******* 2026-01-13 00:40:17.799345 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e91d200a-cf56-55df-b2f8-08f15361112f'}})  2026-01-13 00:40:17.799358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7ebda4f6-7b50-59b0-8273-b291dd7d1677'}})  2026-01-13 00:40:17.799374 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:17.799384 | orchestrator | 2026-01-13 00:40:17.799396 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-13 00:40:17.799408 | orchestrator | Tuesday 13 January 2026 00:40:15 +0000 (0:00:00.136) 0:00:34.575 ******* 2026-01-13 00:40:17.799420 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:40:17.799431 | orchestrator | 2026-01-13 00:40:17.799441 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-13 00:40:17.799455 | orchestrator | Tuesday 13 January 2026 00:40:15 +0000 (0:00:00.132) 0:00:34.707 ******* 2026-01-13 00:40:17.799471 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:40:17.799482 | orchestrator | 2026-01-13 00:40:17.799490 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-13 00:40:17.799498 | orchestrator | Tuesday 13 January 2026 00:40:15 +0000 (0:00:00.109) 0:00:34.816 ******* 2026-01-13 00:40:17.799507 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:17.799515 | orchestrator | 2026-01-13 00:40:17.799522 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-13 00:40:17.799530 | orchestrator | Tuesday 13 January 2026 00:40:15 +0000 (0:00:00.101) 0:00:34.918 ******* 2026-01-13 00:40:17.799539 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:17.799554 | orchestrator | 2026-01-13 00:40:17.799568 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-13 00:40:17.799576 | orchestrator | Tuesday 13 January 2026 00:40:16 +0000 
(0:00:00.131) 0:00:35.049 ******* 2026-01-13 00:40:17.799585 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:17.799595 | orchestrator | 2026-01-13 00:40:17.799610 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-13 00:40:17.799623 | orchestrator | Tuesday 13 January 2026 00:40:16 +0000 (0:00:00.112) 0:00:35.162 ******* 2026-01-13 00:40:17.799632 | orchestrator | ok: [testbed-node-5] => { 2026-01-13 00:40:17.799644 | orchestrator |  "ceph_osd_devices": { 2026-01-13 00:40:17.799654 | orchestrator |  "sdb": { 2026-01-13 00:40:17.799752 | orchestrator |  "osd_lvm_uuid": "e91d200a-cf56-55df-b2f8-08f15361112f" 2026-01-13 00:40:17.799767 | orchestrator |  }, 2026-01-13 00:40:17.799778 | orchestrator |  "sdc": { 2026-01-13 00:40:17.799786 | orchestrator |  "osd_lvm_uuid": "7ebda4f6-7b50-59b0-8273-b291dd7d1677" 2026-01-13 00:40:17.799799 | orchestrator |  } 2026-01-13 00:40:17.799811 | orchestrator |  } 2026-01-13 00:40:17.799821 | orchestrator | } 2026-01-13 00:40:17.799830 | orchestrator | 2026-01-13 00:40:17.799841 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-13 00:40:17.799851 | orchestrator | Tuesday 13 January 2026 00:40:16 +0000 (0:00:00.120) 0:00:35.282 ******* 2026-01-13 00:40:17.799861 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:17.799870 | orchestrator | 2026-01-13 00:40:17.799880 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-13 00:40:17.799889 | orchestrator | Tuesday 13 January 2026 00:40:16 +0000 (0:00:00.232) 0:00:35.514 ******* 2026-01-13 00:40:17.799935 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:17.799947 | orchestrator | 2026-01-13 00:40:17.799957 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-13 00:40:17.799968 | orchestrator | Tuesday 13 January 2026 00:40:16 +0000 
(0:00:00.100) 0:00:35.614 ******* 2026-01-13 00:40:17.799980 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:40:17.799988 | orchestrator | 2026-01-13 00:40:17.799999 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-13 00:40:17.800011 | orchestrator | Tuesday 13 January 2026 00:40:16 +0000 (0:00:00.100) 0:00:35.715 ******* 2026-01-13 00:40:17.800023 | orchestrator | changed: [testbed-node-5] => { 2026-01-13 00:40:17.800036 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-13 00:40:17.800044 | orchestrator |  "ceph_osd_devices": { 2026-01-13 00:40:17.800056 | orchestrator |  "sdb": { 2026-01-13 00:40:17.800067 | orchestrator |  "osd_lvm_uuid": "e91d200a-cf56-55df-b2f8-08f15361112f" 2026-01-13 00:40:17.800079 | orchestrator |  }, 2026-01-13 00:40:17.800090 | orchestrator |  "sdc": { 2026-01-13 00:40:17.800099 | orchestrator |  "osd_lvm_uuid": "7ebda4f6-7b50-59b0-8273-b291dd7d1677" 2026-01-13 00:40:17.800110 | orchestrator |  } 2026-01-13 00:40:17.800122 | orchestrator |  }, 2026-01-13 00:40:17.800132 | orchestrator |  "lvm_volumes": [ 2026-01-13 00:40:17.800142 | orchestrator |  { 2026-01-13 00:40:17.800151 | orchestrator |  "data": "osd-block-e91d200a-cf56-55df-b2f8-08f15361112f", 2026-01-13 00:40:17.800161 | orchestrator |  "data_vg": "ceph-e91d200a-cf56-55df-b2f8-08f15361112f" 2026-01-13 00:40:17.800172 | orchestrator |  }, 2026-01-13 00:40:17.800183 | orchestrator |  { 2026-01-13 00:40:17.800195 | orchestrator |  "data": "osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677", 2026-01-13 00:40:17.800219 | orchestrator |  "data_vg": "ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677" 2026-01-13 00:40:17.800232 | orchestrator |  } 2026-01-13 00:40:17.800248 | orchestrator |  ] 2026-01-13 00:40:17.800256 | orchestrator |  } 2026-01-13 00:40:17.800268 | orchestrator | } 2026-01-13 00:40:17.800277 | orchestrator | 2026-01-13 00:40:17.800289 | orchestrator | RUNNING HANDLER [Write configuration file] 
*************************************
2026-01-13 00:40:17.800300 | orchestrator | Tuesday 13 January 2026 00:40:16 +0000 (0:00:00.169) 0:00:35.885 *******
2026-01-13 00:40:17.800309 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-13 00:40:17.800321 | orchestrator |
2026-01-13 00:40:17.800332 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 00:40:17.800344 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-13 00:40:17.800355 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-13 00:40:17.800363 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-13 00:40:17.800371 | orchestrator |
2026-01-13 00:40:17.800379 | orchestrator |
2026-01-13 00:40:17.800387 | orchestrator |
2026-01-13 00:40:17.800396 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 00:40:17.800408 | orchestrator | Tuesday 13 January 2026 00:40:17 +0000 (0:00:00.852) 0:00:36.738 *******
2026-01-13 00:40:17.800420 | orchestrator | ===============================================================================
2026-01-13 00:40:17.800429 | orchestrator | Write configuration file ------------------------------------------------ 3.58s
2026-01-13 00:40:17.800439 | orchestrator | Add known links to the list of available block devices ------------------ 1.11s
2026-01-13 00:40:17.800451 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s
2026-01-13 00:40:17.800463 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.96s
2026-01-13 00:40:17.800481 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s
2026-01-13 00:40:17.800494 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s
2026-01-13 00:40:17.800506 | orchestrator | Print configuration data ------------------------------------------------ 0.73s
2026-01-13 00:40:17.800515 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2026-01-13 00:40:17.800524 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2026-01-13 00:40:17.800534 | orchestrator | Get initial list of available block devices ----------------------------- 0.63s
2026-01-13 00:40:17.800545 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2026-01-13 00:40:17.800557 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s
2026-01-13 00:40:17.800566 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.56s
2026-01-13 00:40:17.800588 | orchestrator | Add known partitions to the list of available block devices ------------- 0.55s
2026-01-13 00:40:18.160982 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s
2026-01-13 00:40:18.161103 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s
2026-01-13 00:40:18.161127 | orchestrator | Add known partitions to the list of available block devices ------------- 0.48s
2026-01-13 00:40:18.161143 | orchestrator | Set DB devices config data ---------------------------------------------- 0.47s
2026-01-13 00:40:18.161162 | orchestrator | Add known links to the list of available block devices ------------------ 0.47s
2026-01-13 00:40:18.161180 | orchestrator | Print WAL devices ------------------------------------------------------- 0.47s
2026-01-13 00:40:41.066353 | orchestrator | 2026-01-13 00:40:41 | INFO  | Task 5b6e9c71-ef81-48c9-b389-576a03ff85e9 (sync inventory) is running in background. Output coming soon.
2026-01-13 00:41:07.240855 | orchestrator | 2026-01-13 00:40:42 | INFO  | Starting group_vars file reorganization 2026-01-13 00:41:07.240966 | orchestrator | 2026-01-13 00:40:42 | INFO  | Moved 0 file(s) to their respective directories 2026-01-13 00:41:07.240979 | orchestrator | 2026-01-13 00:40:42 | INFO  | Group_vars file reorganization completed 2026-01-13 00:41:07.240988 | orchestrator | 2026-01-13 00:40:45 | INFO  | Starting variable preparation from inventory 2026-01-13 00:41:07.240995 | orchestrator | 2026-01-13 00:40:48 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-01-13 00:41:07.241003 | orchestrator | 2026-01-13 00:40:48 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-01-13 00:41:07.241010 | orchestrator | 2026-01-13 00:40:48 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-01-13 00:41:07.241018 | orchestrator | 2026-01-13 00:40:48 | INFO  | 3 file(s) written, 6 host(s) processed 2026-01-13 00:41:07.241026 | orchestrator | 2026-01-13 00:40:48 | INFO  | Variable preparation completed 2026-01-13 00:41:07.241033 | orchestrator | 2026-01-13 00:40:50 | INFO  | Starting inventory overwrite handling 2026-01-13 00:41:07.241041 | orchestrator | 2026-01-13 00:40:50 | INFO  | Handling group overwrites in 99-overwrite 2026-01-13 00:41:07.241048 | orchestrator | 2026-01-13 00:40:50 | INFO  | Removing group frr:children from 60-generic 2026-01-13 00:41:07.241056 | orchestrator | 2026-01-13 00:40:50 | INFO  | Removing group netbird:children from 50-infrastructure 2026-01-13 00:41:07.241063 | orchestrator | 2026-01-13 00:40:50 | INFO  | Removing group ceph-rgw from 50-ceph 2026-01-13 00:41:07.241071 | orchestrator | 2026-01-13 00:40:50 | INFO  | Removing group ceph-mds from 50-ceph 2026-01-13 00:41:07.241078 | orchestrator | 2026-01-13 00:40:50 | INFO  | Handling group overwrites in 20-roles 2026-01-13 00:41:07.241109 | orchestrator | 2026-01-13 00:40:50 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-01-13 00:41:07.241117 | orchestrator | 2026-01-13 00:40:50 | INFO  | Removed 5 group(s) in total 2026-01-13 00:41:07.241124 | orchestrator | 2026-01-13 00:40:50 | INFO  | Inventory overwrite handling completed 2026-01-13 00:41:07.241131 | orchestrator | 2026-01-13 00:40:51 | INFO  | Starting merge of inventory files 2026-01-13 00:41:07.241138 | orchestrator | 2026-01-13 00:40:51 | INFO  | Inventory files merged successfully 2026-01-13 00:41:07.241149 | orchestrator | 2026-01-13 00:40:56 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-01-13 00:41:07.241161 | orchestrator | 2026-01-13 00:41:06 | INFO  | Successfully wrote ClusterShell configuration 2026-01-13 00:41:07.241178 | orchestrator | [master ea9e2af] 2026-01-13-00-41 2026-01-13 00:41:07.241195 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-01-13 00:41:09.226959 | orchestrator | 2026-01-13 00:41:09 | INFO  | Task a91a25af-a18a-484c-9019-a75474dc5746 (ceph-create-lvm-devices) was prepared for execution. 2026-01-13 00:41:09.227047 | orchestrator | 2026-01-13 00:41:09 | INFO  | It takes a moment until task a91a25af-a18a-484c-9019-a75474dc5746 (ceph-create-lvm-devices) has been started and output is visible here. 
2026-01-13 00:41:18.906295 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-13 00:41:18.906378 | orchestrator | 2.16.14
2026-01-13 00:41:18.906387 | orchestrator |
2026-01-13 00:41:18.906394 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-13 00:41:18.906401 | orchestrator |
2026-01-13 00:41:18.906407 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-13 00:41:18.906413 | orchestrator | Tuesday 13 January 2026 00:41:12 +0000 (0:00:00.283) 0:00:00.283 *******
2026-01-13 00:41:18.906419 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-13 00:41:18.906425 | orchestrator |
2026-01-13 00:41:18.906431 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-13 00:41:18.906436 | orchestrator | Tuesday 13 January 2026 00:41:12 +0000 (0:00:00.195) 0:00:00.479 *******
2026-01-13 00:41:18.906442 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:41:18.906447 | orchestrator |
2026-01-13 00:41:18.906453 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.906459 | orchestrator | Tuesday 13 January 2026 00:41:12 +0000 (0:00:00.181) 0:00:00.661 *******
2026-01-13 00:41:18.906465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-13 00:41:18.906470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-13 00:41:18.906475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-13 00:41:18.906481 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-13 00:41:18.906487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-13 00:41:18.906496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-13 00:41:18.906506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-13 00:41:18.906515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-13 00:41:18.906524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-13 00:41:18.906551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-13 00:41:18.906560 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-13 00:41:18.906569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-13 00:41:18.906596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-13 00:41:18.906604 | orchestrator |
2026-01-13 00:41:18.906613 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.906623 | orchestrator | Tuesday 13 January 2026 00:41:13 +0000 (0:00:00.412) 0:00:01.073 *******
2026-01-13 00:41:18.906632 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.906642 | orchestrator |
2026-01-13 00:41:18.906652 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.906662 | orchestrator | Tuesday 13 January 2026 00:41:13 +0000 (0:00:00.171) 0:00:01.245 *******
2026-01-13 00:41:18.906672 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.906681 | orchestrator |
2026-01-13 00:41:18.906747 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.906755 | orchestrator | Tuesday 13 January 2026 00:41:13 +0000 (0:00:00.170) 0:00:01.415 *******
2026-01-13 00:41:18.906761 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.906766 | orchestrator |
2026-01-13 00:41:18.906771 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.906777 | orchestrator | Tuesday 13 January 2026 00:41:13 +0000 (0:00:00.184) 0:00:01.600 *******
2026-01-13 00:41:18.906782 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.906787 | orchestrator |
2026-01-13 00:41:18.906793 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.906798 | orchestrator | Tuesday 13 January 2026 00:41:14 +0000 (0:00:00.193) 0:00:01.794 *******
2026-01-13 00:41:18.906803 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.906808 | orchestrator |
2026-01-13 00:41:18.906814 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.906819 | orchestrator | Tuesday 13 January 2026 00:41:14 +0000 (0:00:00.159) 0:00:01.953 *******
2026-01-13 00:41:18.906824 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.906830 | orchestrator |
2026-01-13 00:41:18.906835 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.906840 | orchestrator | Tuesday 13 January 2026 00:41:14 +0000 (0:00:00.129) 0:00:02.083 *******
2026-01-13 00:41:18.906845 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.906852 | orchestrator |
2026-01-13 00:41:18.906858 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.906864 | orchestrator | Tuesday 13 January 2026 00:41:14 +0000 (0:00:00.151) 0:00:02.234 *******
2026-01-13 00:41:18.906870 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.906875 | orchestrator |
2026-01-13 00:41:18.906882 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.906887 | orchestrator | Tuesday 13 January 2026 00:41:14 +0000 (0:00:00.162) 0:00:02.396 *******
2026-01-13 00:41:18.906894 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315)
2026-01-13 00:41:18.906901 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315)
2026-01-13 00:41:18.906907 | orchestrator |
2026-01-13 00:41:18.906913 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.906932 | orchestrator | Tuesday 13 January 2026 00:41:15 +0000 (0:00:00.380) 0:00:02.777 *******
2026-01-13 00:41:18.906939 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_49cd33e4-72cd-4f3f-940d-55c9f0f00a98)
2026-01-13 00:41:18.906945 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_49cd33e4-72cd-4f3f-940d-55c9f0f00a98)
2026-01-13 00:41:18.906952 | orchestrator |
2026-01-13 00:41:18.906958 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.906964 | orchestrator | Tuesday 13 January 2026 00:41:15 +0000 (0:00:00.570) 0:00:03.347 *******
2026-01-13 00:41:18.906970 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1f00cc32-4927-4d99-9c1e-b649b1d1f573)
2026-01-13 00:41:18.906982 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1f00cc32-4927-4d99-9c1e-b649b1d1f573)
2026-01-13 00:41:18.906988 | orchestrator |
2026-01-13 00:41:18.906994 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.907000 | orchestrator | Tuesday 13 January 2026 00:41:16 +0000 (0:00:00.512) 0:00:03.860 *******
2026-01-13 00:41:18.907006 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0a292857-8cd9-4a14-95ba-a5d022f4a90e)
2026-01-13 00:41:18.907013 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0a292857-8cd9-4a14-95ba-a5d022f4a90e)
2026-01-13 00:41:18.907019 | orchestrator |
2026-01-13 00:41:18.907025 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:18.907031 | orchestrator | Tuesday 13 January 2026 00:41:16 +0000 (0:00:00.692) 0:00:04.553 *******
2026-01-13 00:41:18.907038 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-13 00:41:18.907044 | orchestrator |
2026-01-13 00:41:18.907050 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:18.907056 | orchestrator | Tuesday 13 January 2026 00:41:17 +0000 (0:00:00.299) 0:00:04.852 *******
2026-01-13 00:41:18.907062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-13 00:41:18.907068 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-13 00:41:18.907074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-13 00:41:18.907080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-13 00:41:18.907086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-13 00:41:18.907092 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-13 00:41:18.907098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-13 00:41:18.907104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-13 00:41:18.907110 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-13 00:41:18.907116 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-13 00:41:18.907123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-13 00:41:18.907132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-13 00:41:18.907141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-13 00:41:18.907150 | orchestrator |
2026-01-13 00:41:18.907162 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:18.907171 | orchestrator | Tuesday 13 January 2026 00:41:17 +0000 (0:00:00.368) 0:00:05.221 *******
2026-01-13 00:41:18.907180 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.907189 | orchestrator |
2026-01-13 00:41:18.907198 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:18.907207 | orchestrator | Tuesday 13 January 2026 00:41:17 +0000 (0:00:00.183) 0:00:05.405 *******
2026-01-13 00:41:18.907216 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.907225 | orchestrator |
2026-01-13 00:41:18.907238 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:18.907247 | orchestrator | Tuesday 13 January 2026 00:41:17 +0000 (0:00:00.179) 0:00:05.584 *******
2026-01-13 00:41:18.907256 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.907264 | orchestrator |
2026-01-13 00:41:18.907269 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:18.907275 | orchestrator | Tuesday 13 January 2026 00:41:18 +0000 (0:00:00.168) 0:00:05.753 *******
2026-01-13 00:41:18.907285 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.907290 | orchestrator |
2026-01-13 00:41:18.907296 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:18.907301 | orchestrator | Tuesday 13 January 2026 00:41:18 +0000 (0:00:00.184) 0:00:05.937 *******
2026-01-13 00:41:18.907306 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.907312 | orchestrator |
2026-01-13 00:41:18.907317 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:18.907322 | orchestrator | Tuesday 13 January 2026 00:41:18 +0000 (0:00:00.211) 0:00:06.148 *******
2026-01-13 00:41:18.907328 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.907333 | orchestrator |
2026-01-13 00:41:18.907338 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:18.907344 | orchestrator | Tuesday 13 January 2026 00:41:18 +0000 (0:00:00.208) 0:00:06.357 *******
2026-01-13 00:41:18.907349 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:18.907354 | orchestrator |
2026-01-13 00:41:18.907364 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:26.794216 | orchestrator | Tuesday 13 January 2026 00:41:18 +0000 (0:00:00.231) 0:00:06.589 *******
2026-01-13 00:41:26.794297 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794304 | orchestrator |
2026-01-13 00:41:26.794310 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:26.794314 | orchestrator | Tuesday 13 January 2026 00:41:19 +0000 (0:00:00.232) 0:00:06.821 *******
2026-01-13 00:41:26.794319 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-13 00:41:26.794324 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-13 00:41:26.794331 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-13 00:41:26.794337 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-13 00:41:26.794343 | orchestrator |
2026-01-13 00:41:26.794349 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:26.794355 | orchestrator | Tuesday 13 January 2026 00:41:20 +0000 (0:00:01.173) 0:00:07.995 *******
2026-01-13 00:41:26.794362 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794368 | orchestrator |
2026-01-13 00:41:26.794375 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:26.794382 | orchestrator | Tuesday 13 January 2026 00:41:20 +0000 (0:00:00.191) 0:00:08.186 *******
2026-01-13 00:41:26.794388 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794395 | orchestrator |
2026-01-13 00:41:26.794400 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:26.794404 | orchestrator | Tuesday 13 January 2026 00:41:20 +0000 (0:00:00.175) 0:00:08.362 *******
2026-01-13 00:41:26.794408 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794412 | orchestrator |
2026-01-13 00:41:26.794416 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:26.794420 | orchestrator | Tuesday 13 January 2026 00:41:20 +0000 (0:00:00.198) 0:00:08.560 *******
2026-01-13 00:41:26.794424 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794428 | orchestrator |
2026-01-13 00:41:26.794432 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-13 00:41:26.794436 | orchestrator | Tuesday 13 January 2026 00:41:21 +0000 (0:00:00.202) 0:00:08.763 *******
2026-01-13 00:41:26.794440 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794443 | orchestrator |
2026-01-13 00:41:26.794447 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-13 00:41:26.794451 | orchestrator | Tuesday 13 January 2026 00:41:21 +0000 (0:00:00.130) 0:00:08.893 *******
2026-01-13 00:41:26.794468 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b9be54a9-cd9c-568c-9220-61b18da052d9'}})
2026-01-13 00:41:26.794473 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '03961d85-1922-5669-8251-0ccc6cca9fac'}})
2026-01-13 00:41:26.794477 | orchestrator |
2026-01-13 00:41:26.794481 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-13 00:41:26.794498 | orchestrator | Tuesday 13 January 2026 00:41:21 +0000 (0:00:00.175) 0:00:09.068 *******
2026-01-13 00:41:26.794503 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:26.794508 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:26.794512 | orchestrator |
2026-01-13 00:41:26.794518 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-13 00:41:26.794522 | orchestrator | Tuesday 13 January 2026 00:41:23 +0000 (0:00:01.926) 0:00:10.995 *******
2026-01-13 00:41:26.794526 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:26.794531 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:26.794534 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794538 | orchestrator |
2026-01-13 00:41:26.794542 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-13 00:41:26.794546 | orchestrator | Tuesday 13 January 2026 00:41:23 +0000 (0:00:00.145) 0:00:11.140 *******
2026-01-13 00:41:26.794549 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:26.794553 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:26.794557 | orchestrator |
2026-01-13 00:41:26.794561 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-13 00:41:26.794565 | orchestrator | Tuesday 13 January 2026 00:41:24 +0000 (0:00:01.450) 0:00:12.591 *******
2026-01-13 00:41:26.794568 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:26.794572 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:26.794576 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794580 | orchestrator |
2026-01-13 00:41:26.794583 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-13 00:41:26.794587 | orchestrator | Tuesday 13 January 2026 00:41:25 +0000 (0:00:00.162) 0:00:12.753 *******
2026-01-13 00:41:26.794601 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794605 | orchestrator |
2026-01-13 00:41:26.794609 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-13 00:41:26.794612 | orchestrator | Tuesday 13 January 2026 00:41:25 +0000 (0:00:00.127) 0:00:12.880 *******
2026-01-13 00:41:26.794616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:26.794620 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:26.794624 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794627 | orchestrator |
2026-01-13 00:41:26.794631 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-13 00:41:26.794635 | orchestrator | Tuesday 13 January 2026 00:41:25 +0000 (0:00:00.364) 0:00:13.244 *******
2026-01-13 00:41:26.794639 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794642 | orchestrator |
2026-01-13 00:41:26.794646 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-13 00:41:26.794650 | orchestrator | Tuesday 13 January 2026 00:41:25 +0000 (0:00:00.126) 0:00:13.371 *******
2026-01-13 00:41:26.794657 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:26.794661 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:26.794665 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794668 | orchestrator |
2026-01-13 00:41:26.794672 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-13 00:41:26.794676 | orchestrator | Tuesday 13 January 2026 00:41:25 +0000 (0:00:00.132) 0:00:13.507 *******
2026-01-13 00:41:26.794679 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794683 | orchestrator |
2026-01-13 00:41:26.794687 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-13 00:41:26.794691 | orchestrator | Tuesday 13 January 2026 00:41:25 +0000 (0:00:00.132) 0:00:13.639 *******
2026-01-13 00:41:26.794694 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:26.794736 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:26.794740 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794744 | orchestrator |
2026-01-13 00:41:26.794747 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-13 00:41:26.794751 | orchestrator | Tuesday 13 January 2026 00:41:26 +0000 (0:00:00.162) 0:00:13.802 *******
2026-01-13 00:41:26.794755 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:41:26.794759 | orchestrator |
2026-01-13 00:41:26.794762 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-13 00:41:26.794766 | orchestrator | Tuesday 13 January 2026 00:41:26 +0000 (0:00:00.121) 0:00:13.923 *******
2026-01-13 00:41:26.794773 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:26.794777 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:26.794782 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794786 | orchestrator |
2026-01-13 00:41:26.794790 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-13 00:41:26.794795 | orchestrator | Tuesday 13 January 2026 00:41:26 +0000 (0:00:00.134) 0:00:14.058 *******
2026-01-13 00:41:26.794799 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:26.794803 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:26.794807 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794812 | orchestrator |
2026-01-13 00:41:26.794816 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-13 00:41:26.794820 | orchestrator | Tuesday 13 January 2026 00:41:26 +0000 (0:00:00.145) 0:00:14.203 *******
2026-01-13 00:41:26.794825 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:26.794829 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:26.794833 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794837 | orchestrator |
2026-01-13 00:41:26.794842 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-13 00:41:26.794849 | orchestrator | Tuesday 13 January 2026 00:41:26 +0000 (0:00:00.154) 0:00:14.358 *******
2026-01-13 00:41:26.794854 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:26.794858 | orchestrator |
2026-01-13 00:41:26.794862 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-13 00:41:26.794870 | orchestrator | Tuesday 13 January 2026 00:41:26 +0000 (0:00:00.118) 0:00:14.476 *******
2026-01-13 00:41:33.605873 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.606009 | orchestrator |
2026-01-13 00:41:33.606122 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-13 00:41:33.606137 | orchestrator | Tuesday 13 January 2026 00:41:26 +0000 (0:00:00.125) 0:00:14.601 *******
2026-01-13 00:41:33.606149 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.606160 | orchestrator |
2026-01-13 00:41:33.606172 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-13 00:41:33.606182 | orchestrator | Tuesday 13 January 2026 00:41:27 +0000 (0:00:00.113) 0:00:14.715 *******
2026-01-13 00:41:33.606193 | orchestrator | ok: [testbed-node-3] => {
2026-01-13 00:41:33.606205 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-13 00:41:33.606216 | orchestrator | }
2026-01-13 00:41:33.606227 | orchestrator |
2026-01-13 00:41:33.606238 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-13 00:41:33.606249 | orchestrator | Tuesday 13 January 2026 00:41:27 +0000 (0:00:00.296) 0:00:15.011 *******
2026-01-13 00:41:33.606260 | orchestrator | ok: [testbed-node-3] => {
2026-01-13 00:41:33.606270 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-13 00:41:33.606281 | orchestrator | }
2026-01-13 00:41:33.606292 | orchestrator |
2026-01-13 00:41:33.606303 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-13 00:41:33.606314 | orchestrator | Tuesday 13 January 2026 00:41:27 +0000 (0:00:00.157) 0:00:15.169 *******
2026-01-13 00:41:33.606326 | orchestrator | ok: [testbed-node-3] => {
2026-01-13 00:41:33.606337 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-13 00:41:33.606349 | orchestrator | }
2026-01-13 00:41:33.606361 | orchestrator |
2026-01-13 00:41:33.606373 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-13 00:41:33.606385 | orchestrator | Tuesday 13 January 2026 00:41:27 +0000 (0:00:00.159) 0:00:15.328 *******
2026-01-13 00:41:33.606397 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:41:33.606410 | orchestrator |
2026-01-13 00:41:33.606422 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-13 00:41:33.606434 | orchestrator | Tuesday 13 January 2026 00:41:28 +0000 (0:00:00.601) 0:00:15.930 *******
2026-01-13 00:41:33.606446 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:41:33.606462 | orchestrator |
2026-01-13 00:41:33.606480 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-13 00:41:33.606495 | orchestrator | Tuesday 13 January 2026 00:41:28 +0000 (0:00:00.509) 0:00:16.440 *******
2026-01-13 00:41:33.606511 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:41:33.606539 | orchestrator |
2026-01-13 00:41:33.606559 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-13 00:41:33.606577 | orchestrator | Tuesday 13 January 2026 00:41:29 +0000 (0:00:00.525) 0:00:16.965 *******
2026-01-13 00:41:33.606594 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:41:33.606612 | orchestrator |
2026-01-13 00:41:33.606631 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-13 00:41:33.606649 | orchestrator | Tuesday 13 January 2026 00:41:29 +0000 (0:00:00.127) 0:00:17.093 *******
2026-01-13 00:41:33.606666 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.606684 | orchestrator |
2026-01-13 00:41:33.606725 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-13 00:41:33.606745 | orchestrator | Tuesday 13 January 2026 00:41:29 +0000 (0:00:00.126) 0:00:17.219 *******
2026-01-13 00:41:33.606763 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.606781 | orchestrator |
2026-01-13 00:41:33.606799 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-13 00:41:33.606850 | orchestrator | Tuesday 13 January 2026 00:41:29 +0000 (0:00:00.122) 0:00:17.341 *******
2026-01-13 00:41:33.606871 | orchestrator | ok: [testbed-node-3] => {
2026-01-13 00:41:33.606890 | orchestrator |     "vgs_report": {
2026-01-13 00:41:33.606908 | orchestrator |         "vg": []
2026-01-13 00:41:33.606925 | orchestrator |     }
2026-01-13 00:41:33.606943 | orchestrator | }
2026-01-13 00:41:33.606961 | orchestrator |
2026-01-13 00:41:33.606979 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-13 00:41:33.606998 | orchestrator | Tuesday 13 January 2026 00:41:29 +0000 (0:00:00.127) 0:00:17.469 *******
2026-01-13 00:41:33.607015 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607026 | orchestrator |
2026-01-13 00:41:33.607036 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-13 00:41:33.607069 | orchestrator | Tuesday 13 January 2026 00:41:29 +0000 (0:00:00.133) 0:00:17.602 *******
2026-01-13 00:41:33.607080 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607091 | orchestrator |
2026-01-13 00:41:33.607102 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-13 00:41:33.607112 | orchestrator | Tuesday 13 January 2026 00:41:30 +0000 (0:00:00.157) 0:00:17.760 *******
2026-01-13 00:41:33.607123 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607133 | orchestrator |
2026-01-13 00:41:33.607144 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-13 00:41:33.607154 | orchestrator | Tuesday 13 January 2026 00:41:30 +0000 (0:00:00.462) 0:00:18.223 *******
2026-01-13 00:41:33.607165 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607175 | orchestrator |
2026-01-13 00:41:33.607187 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-13 00:41:33.607198 | orchestrator | Tuesday 13 January 2026 00:41:30 +0000 (0:00:00.199) 0:00:18.422 *******
2026-01-13 00:41:33.607208 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607219 | orchestrator |
2026-01-13 00:41:33.607229 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-13 00:41:33.607240 | orchestrator | Tuesday 13 January 2026 00:41:30 +0000 (0:00:00.161) 0:00:18.583 *******
2026-01-13 00:41:33.607250 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607260 | orchestrator |
2026-01-13 00:41:33.607271 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-13 00:41:33.607281 | orchestrator | Tuesday 13 January 2026 00:41:31 +0000 (0:00:00.138) 0:00:18.722 *******
2026-01-13 00:41:33.607292 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607302 | orchestrator |
2026-01-13 00:41:33.607313 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-13 00:41:33.607324 | orchestrator | Tuesday 13 January 2026 00:41:31 +0000 (0:00:00.152) 0:00:18.874 *******
2026-01-13 00:41:33.607357 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607369 | orchestrator |
2026-01-13 00:41:33.607379 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-13 00:41:33.607390 | orchestrator | Tuesday 13 January 2026 00:41:31 +0000 (0:00:00.185) 0:00:19.060 *******
2026-01-13 00:41:33.607401 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607411 | orchestrator |
2026-01-13 00:41:33.607421 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-13 00:41:33.607432 | orchestrator | Tuesday 13 January 2026 00:41:31 +0000 (0:00:00.142) 0:00:19.202 *******
2026-01-13 00:41:33.607443 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607453 | orchestrator |
2026-01-13 00:41:33.607464 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-13 00:41:33.607475 | orchestrator | Tuesday 13 January 2026 00:41:31 +0000 (0:00:00.142) 0:00:19.345 *******
2026-01-13 00:41:33.607485 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607516 | orchestrator |
2026-01-13 00:41:33.607547 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-13 00:41:33.607566 | orchestrator | Tuesday 13 January 2026 00:41:31 +0000 (0:00:00.157) 0:00:19.502 *******
2026-01-13 00:41:33.607598 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607618 | orchestrator |
2026-01-13 00:41:33.607636 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-13 00:41:33.607656 | orchestrator | Tuesday 13 January 2026 00:41:31 +0000 (0:00:00.144) 0:00:19.646 *******
2026-01-13 00:41:33.607675 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607694 | orchestrator |
2026-01-13 00:41:33.607784 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-13 00:41:33.607804 | orchestrator | Tuesday 13 January 2026 00:41:32 +0000 (0:00:00.152) 0:00:19.799 *******
2026-01-13 00:41:33.607821 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607840 | orchestrator |
2026-01-13 00:41:33.607856 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-13 00:41:33.607874 | orchestrator | Tuesday 13 January 2026 00:41:32 +0000 (0:00:00.158) 0:00:19.957 *******
2026-01-13 00:41:33.607893 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:33.607913 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:33.607931 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.607951 | orchestrator |
2026-01-13 00:41:33.607970 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-13 00:41:33.607988 | orchestrator | Tuesday 13 January 2026 00:41:32 +0000 (0:00:00.486) 0:00:20.443 *******
2026-01-13 00:41:33.608006 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:33.608025 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:33.608043 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.608061 | orchestrator |
2026-01-13 00:41:33.608079 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-13 00:41:33.608107 | orchestrator | Tuesday 13 January 2026 00:41:32 +0000 (0:00:00.172) 0:00:20.616 *******
2026-01-13 00:41:33.608126 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:33.608144 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:33.608161 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.608179 | orchestrator |
2026-01-13 00:41:33.608196 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-13 00:41:33.608215 | orchestrator | Tuesday 13 January 2026 00:41:33 +0000 (0:00:00.177) 0:00:20.794 *******
2026-01-13 00:41:33.608233 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:33.608252 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:33.608271 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.608289 | orchestrator |
2026-01-13 00:41:33.608307 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-13 00:41:33.608325 | orchestrator | Tuesday 13 January 2026 00:41:33 +0000 (0:00:00.173) 0:00:20.968 *******
2026-01-13 00:41:33.608343 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:33.608362 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:33.608461 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:33.608480 | orchestrator |
2026-01-13 00:41:33.608498 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-13 00:41:33.608516 | orchestrator | Tuesday 13 January 2026 00:41:33 +0000 (0:00:00.161) 0:00:21.129 *******
2026-01-13 00:41:33.608552 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:39.293481 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:39.293612 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:39.293636 | orchestrator |
2026-01-13 00:41:39.293656 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-13 00:41:39.293675 | orchestrator | Tuesday 13 January 2026 00:41:33 +0000 (0:00:00.161) 0:00:21.291 *******
2026-01-13 00:41:39.293692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:39.293762 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:39.293782 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:39.293798 | orchestrator |
2026-01-13 00:41:39.293816 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-13 00:41:39.293834 | orchestrator | Tuesday 13 January 2026 00:41:33 +0000 (0:00:00.171) 0:00:21.462 *******
2026-01-13 00:41:39.293851 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:41:39.293868 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:41:39.293885 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:41:39.293900 | orchestrator |
2026-01-13 00:41:39.293916 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-13 00:41:39.293931 | orchestrator | Tuesday 13 January 2026 00:41:33 +0000 (0:00:00.150) 0:00:21.613 *******
2026-01-13 00:41:39.293947 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:41:39.293965 | orchestrator |
2026-01-13 00:41:39.293982 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-13 00:41:39.293999 | orchestrator | Tuesday 13 January 2026 00:41:34 +0000
(0:00:00.498) 0:00:22.111 ******* 2026-01-13 00:41:39.294087 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:41:39.294108 | orchestrator | 2026-01-13 00:41:39.294126 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-13 00:41:39.294144 | orchestrator | Tuesday 13 January 2026 00:41:34 +0000 (0:00:00.544) 0:00:22.656 ******* 2026-01-13 00:41:39.294162 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:41:39.294181 | orchestrator | 2026-01-13 00:41:39.294218 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-13 00:41:39.294249 | orchestrator | Tuesday 13 January 2026 00:41:35 +0000 (0:00:00.149) 0:00:22.806 ******* 2026-01-13 00:41:39.294267 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'vg_name': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'}) 2026-01-13 00:41:39.294285 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'vg_name': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'}) 2026-01-13 00:41:39.294301 | orchestrator | 2026-01-13 00:41:39.294316 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-13 00:41:39.294330 | orchestrator | Tuesday 13 January 2026 00:41:35 +0000 (0:00:00.166) 0:00:22.972 ******* 2026-01-13 00:41:39.294370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})  2026-01-13 00:41:39.294387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})  2026-01-13 00:41:39.294402 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:41:39.294416 | orchestrator | 2026-01-13 00:41:39.294430 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-13 00:41:39.294444 | orchestrator | Tuesday 13 January 2026 00:41:35 +0000 (0:00:00.377) 0:00:23.349 ******* 2026-01-13 00:41:39.294459 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})  2026-01-13 00:41:39.294474 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})  2026-01-13 00:41:39.294489 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:41:39.294504 | orchestrator | 2026-01-13 00:41:39.294518 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-13 00:41:39.294532 | orchestrator | Tuesday 13 January 2026 00:41:35 +0000 (0:00:00.167) 0:00:23.516 ******* 2026-01-13 00:41:39.294546 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})  2026-01-13 00:41:39.294561 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})  2026-01-13 00:41:39.294576 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:41:39.294591 | orchestrator | 2026-01-13 00:41:39.294605 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-13 00:41:39.294619 | orchestrator | Tuesday 13 January 2026 00:41:35 +0000 (0:00:00.173) 0:00:23.690 ******* 2026-01-13 00:41:39.294653 | orchestrator | ok: [testbed-node-3] => { 2026-01-13 00:41:39.294670 | orchestrator |  "lvm_report": { 2026-01-13 00:41:39.294684 | orchestrator |  "lv": [ 2026-01-13 00:41:39.294698 | orchestrator |  { 2026-01-13 00:41:39.294735 | orchestrator |  "lv_name": 
"osd-block-03961d85-1922-5669-8251-0ccc6cca9fac", 2026-01-13 00:41:39.294750 | orchestrator |  "vg_name": "ceph-03961d85-1922-5669-8251-0ccc6cca9fac" 2026-01-13 00:41:39.294764 | orchestrator |  }, 2026-01-13 00:41:39.294778 | orchestrator |  { 2026-01-13 00:41:39.294791 | orchestrator |  "lv_name": "osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9", 2026-01-13 00:41:39.294805 | orchestrator |  "vg_name": "ceph-b9be54a9-cd9c-568c-9220-61b18da052d9" 2026-01-13 00:41:39.294817 | orchestrator |  } 2026-01-13 00:41:39.294831 | orchestrator |  ], 2026-01-13 00:41:39.294844 | orchestrator |  "pv": [ 2026-01-13 00:41:39.294858 | orchestrator |  { 2026-01-13 00:41:39.294872 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-13 00:41:39.294885 | orchestrator |  "vg_name": "ceph-b9be54a9-cd9c-568c-9220-61b18da052d9" 2026-01-13 00:41:39.294898 | orchestrator |  }, 2026-01-13 00:41:39.294911 | orchestrator |  { 2026-01-13 00:41:39.294924 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-13 00:41:39.294938 | orchestrator |  "vg_name": "ceph-03961d85-1922-5669-8251-0ccc6cca9fac" 2026-01-13 00:41:39.294970 | orchestrator |  } 2026-01-13 00:41:39.294984 | orchestrator |  ] 2026-01-13 00:41:39.294997 | orchestrator |  } 2026-01-13 00:41:39.295010 | orchestrator | } 2026-01-13 00:41:39.295023 | orchestrator | 2026-01-13 00:41:39.295036 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-13 00:41:39.295050 | orchestrator | 2026-01-13 00:41:39.295063 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-13 00:41:39.295088 | orchestrator | Tuesday 13 January 2026 00:41:36 +0000 (0:00:00.317) 0:00:24.007 ******* 2026-01-13 00:41:39.295102 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-13 00:41:39.295115 | orchestrator | 2026-01-13 00:41:39.295128 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-13 
00:41:39.295141 | orchestrator | Tuesday 13 January 2026 00:41:36 +0000 (0:00:00.238) 0:00:24.242 *******
2026-01-13 00:41:39.295154 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:41:39.295167 | orchestrator |
2026-01-13 00:41:39.295180 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:39.295195 | orchestrator | Tuesday 13 January 2026 00:41:36 +0000 (0:00:00.238) 0:00:24.480 *******
2026-01-13 00:41:39.295208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-01-13 00:41:39.295221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-01-13 00:41:39.295234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-01-13 00:41:39.295247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-01-13 00:41:39.295261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-01-13 00:41:39.295274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-01-13 00:41:39.295294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-01-13 00:41:39.295307 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-01-13 00:41:39.295320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-01-13 00:41:39.295333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-01-13 00:41:39.295346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-01-13 00:41:39.295359 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-01-13 00:41:39.295372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-13 00:41:39.295385 | orchestrator |
2026-01-13 00:41:39.295398 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:39.295411 | orchestrator | Tuesday 13 January 2026 00:41:37 +0000 (0:00:00.497) 0:00:24.978 *******
2026-01-13 00:41:39.295424 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:39.295437 | orchestrator |
2026-01-13 00:41:39.295451 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:39.295464 | orchestrator | Tuesday 13 January 2026 00:41:37 +0000 (0:00:00.201) 0:00:25.179 *******
2026-01-13 00:41:39.295477 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:39.295491 | orchestrator |
2026-01-13 00:41:39.295504 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:39.295518 | orchestrator | Tuesday 13 January 2026 00:41:37 +0000 (0:00:00.215) 0:00:25.395 *******
2026-01-13 00:41:39.295532 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:39.295546 | orchestrator |
2026-01-13 00:41:39.295559 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:39.295572 | orchestrator | Tuesday 13 January 2026 00:41:38 +0000 (0:00:00.874) 0:00:26.270 *******
2026-01-13 00:41:39.295582 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:39.295591 | orchestrator |
2026-01-13 00:41:39.295604 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:39.295624 | orchestrator | Tuesday 13 January 2026 00:41:38 +0000 (0:00:00.258) 0:00:26.528 *******
2026-01-13 00:41:39.295639 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:39.295650 | orchestrator |
2026-01-13 00:41:39.295661 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:39.295682 | orchestrator | Tuesday 13 January 2026 00:41:39 +0000 (0:00:00.233) 0:00:26.762 *******
2026-01-13 00:41:39.295695 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:39.295730 | orchestrator |
2026-01-13 00:41:39.295753 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:50.361305 | orchestrator | Tuesday 13 January 2026 00:41:39 +0000 (0:00:00.216) 0:00:26.978 *******
2026-01-13 00:41:50.361427 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.361446 | orchestrator |
2026-01-13 00:41:50.361459 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:50.361470 | orchestrator | Tuesday 13 January 2026 00:41:39 +0000 (0:00:00.230) 0:00:27.209 *******
2026-01-13 00:41:50.361481 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.361492 | orchestrator |
2026-01-13 00:41:50.361504 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:50.361515 | orchestrator | Tuesday 13 January 2026 00:41:39 +0000 (0:00:00.178) 0:00:27.387 *******
2026-01-13 00:41:50.361526 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d)
2026-01-13 00:41:50.361538 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d)
2026-01-13 00:41:50.361549 | orchestrator |
2026-01-13 00:41:50.361560 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:50.361584 | orchestrator | Tuesday 13 January 2026 00:41:40 +0000 (0:00:00.420) 0:00:27.808 *******
2026-01-13 00:41:50.361595 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6ad71b9e-76db-4ac5-b372-050f59253056)
2026-01-13 00:41:50.361606 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6ad71b9e-76db-4ac5-b372-050f59253056)
2026-01-13 00:41:50.361617 | orchestrator |
2026-01-13 00:41:50.361628 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:50.361639 | orchestrator | Tuesday 13 January 2026 00:41:40 +0000 (0:00:00.408) 0:00:28.217 *******
2026-01-13 00:41:50.361650 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9db8234e-f6a8-4211-a809-87a509109e78)
2026-01-13 00:41:50.361661 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9db8234e-f6a8-4211-a809-87a509109e78)
2026-01-13 00:41:50.361672 | orchestrator |
2026-01-13 00:41:50.361682 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:50.361694 | orchestrator | Tuesday 13 January 2026 00:41:40 +0000 (0:00:00.399) 0:00:28.616 *******
2026-01-13 00:41:50.361704 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5c0bff01-3898-4d25-903e-2ecdf087243c)
2026-01-13 00:41:50.361744 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5c0bff01-3898-4d25-903e-2ecdf087243c)
2026-01-13 00:41:50.361756 | orchestrator |
2026-01-13 00:41:50.361774 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-13 00:41:50.361792 | orchestrator | Tuesday 13 January 2026 00:41:41 +0000 (0:00:00.614) 0:00:29.231 *******
2026-01-13 00:41:50.361810 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-13 00:41:50.361829 | orchestrator |
2026-01-13 00:41:50.361850 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.361868 | orchestrator | Tuesday 13 January 2026 00:41:42 +0000 (0:00:00.537) 0:00:29.769 *******
2026-01-13 00:41:50.361900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-13 00:41:50.361914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-13 00:41:50.361927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-13 00:41:50.361940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-13 00:41:50.361953 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-13 00:41:50.361992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-13 00:41:50.362005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-13 00:41:50.362074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-13 00:41:50.362088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-13 00:41:50.362099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-13 00:41:50.362111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-13 00:41:50.362124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-13 00:41:50.362136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-13 00:41:50.362148 | orchestrator |
2026-01-13 00:41:50.362159 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.362170 | orchestrator | Tuesday 13 January 2026 00:41:42 +0000 (0:00:00.838) 0:00:30.607 *******
2026-01-13 00:41:50.362181 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.362191 | orchestrator |
2026-01-13 00:41:50.362203 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.362214 | orchestrator | Tuesday 13 January 2026 00:41:43 +0000 (0:00:00.202) 0:00:30.810 *******
2026-01-13 00:41:50.362225 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.362235 | orchestrator |
2026-01-13 00:41:50.362246 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.362257 | orchestrator | Tuesday 13 January 2026 00:41:43 +0000 (0:00:00.202) 0:00:31.013 *******
2026-01-13 00:41:50.362268 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.362279 | orchestrator |
2026-01-13 00:41:50.362311 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.362330 | orchestrator | Tuesday 13 January 2026 00:41:43 +0000 (0:00:00.198) 0:00:31.211 *******
2026-01-13 00:41:50.362349 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.362366 | orchestrator |
2026-01-13 00:41:50.362384 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.362401 | orchestrator | Tuesday 13 January 2026 00:41:43 +0000 (0:00:00.196) 0:00:31.408 *******
2026-01-13 00:41:50.362418 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.362436 | orchestrator |
2026-01-13 00:41:50.362453 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.362472 | orchestrator | Tuesday 13 January 2026 00:41:43 +0000 (0:00:00.194) 0:00:31.603 *******
2026-01-13 00:41:50.362490 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.362509 | orchestrator |
2026-01-13 00:41:50.362527 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.362539 | orchestrator | Tuesday 13 January 2026 00:41:44 +0000 (0:00:00.200) 0:00:31.804 *******
2026-01-13 00:41:50.362550 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.362560 | orchestrator |
2026-01-13 00:41:50.362571 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.362582 | orchestrator | Tuesday 13 January 2026 00:41:44 +0000 (0:00:00.214) 0:00:32.019 *******
2026-01-13 00:41:50.362592 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.362603 | orchestrator |
2026-01-13 00:41:50.362614 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.362624 | orchestrator | Tuesday 13 January 2026 00:41:44 +0000 (0:00:00.190) 0:00:32.209 *******
2026-01-13 00:41:50.362635 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-13 00:41:50.362646 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-13 00:41:50.362657 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-13 00:41:50.362668 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-13 00:41:50.362690 | orchestrator |
2026-01-13 00:41:50.362701 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.362781 | orchestrator | Tuesday 13 January 2026 00:41:45 +0000 (0:00:00.831) 0:00:33.040 *******
2026-01-13 00:41:50.362793 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.362804 | orchestrator |
2026-01-13 00:41:50.362815 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.362825 | orchestrator | Tuesday 13 January 2026 00:41:45 +0000 (0:00:00.178) 0:00:33.219 *******
2026-01-13 00:41:50.362836 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.362847 | orchestrator |
2026-01-13 00:41:50.362857 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.362868 | orchestrator | Tuesday 13
January 2026 00:41:46 +0000 (0:00:00.619) 0:00:33.838 *******
2026-01-13 00:41:50.362878 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.362913 | orchestrator |
2026-01-13 00:41:50.362925 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-13 00:41:50.362935 | orchestrator | Tuesday 13 January 2026 00:41:46 +0000 (0:00:00.223) 0:00:34.062 *******
2026-01-13 00:41:50.362946 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.362957 | orchestrator |
2026-01-13 00:41:50.362967 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-13 00:41:50.362978 | orchestrator | Tuesday 13 January 2026 00:41:46 +0000 (0:00:00.149) 0:00:34.269 *******
2026-01-13 00:41:50.362989 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.362999 | orchestrator |
2026-01-13 00:41:50.363010 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-13 00:41:50.363021 | orchestrator | Tuesday 13 January 2026 00:41:46 +0000 (0:00:00.149) 0:00:34.418 *******
2026-01-13 00:41:50.363032 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'}})
2026-01-13 00:41:50.363043 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2b3e8737-91e3-53c0-9b3a-5288a4111b63'}})
2026-01-13 00:41:50.363054 | orchestrator |
2026-01-13 00:41:50.363087 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-13 00:41:50.363098 | orchestrator | Tuesday 13 January 2026 00:41:46 +0000 (0:00:00.199) 0:00:34.617 *******
2026-01-13 00:41:50.363111 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})
2026-01-13 00:41:50.363123 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})
2026-01-13 00:41:50.363134 | orchestrator |
2026-01-13 00:41:50.363144 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-13 00:41:50.363155 | orchestrator | Tuesday 13 January 2026 00:41:48 +0000 (0:00:01.893) 0:00:36.511 *******
2026-01-13 00:41:50.363166 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})
2026-01-13 00:41:50.363178 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})
2026-01-13 00:41:50.363189 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:50.363200 | orchestrator |
2026-01-13 00:41:50.363210 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-13 00:41:50.363221 | orchestrator | Tuesday 13 January 2026 00:41:48 +0000 (0:00:00.149) 0:00:36.661 *******
2026-01-13 00:41:50.363232 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})
2026-01-13 00:41:50.363252 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})
2026-01-13 00:41:55.943423 | orchestrator |
2026-01-13 00:41:55.943517 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-13 00:41:55.943528 | orchestrator | Tuesday 13 January 2026 00:41:50 +0000 (0:00:01.382) 0:00:38.043 *******
2026-01-13 00:41:55.943550 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})
2026-01-13 00:41:55.943559 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})
2026-01-13 00:41:55.943565 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:55.943573 | orchestrator |
2026-01-13 00:41:55.943580 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-13 00:41:55.943587 | orchestrator | Tuesday 13 January 2026 00:41:50 +0000 (0:00:00.157) 0:00:38.201 *******
2026-01-13 00:41:55.943593 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:55.943600 | orchestrator |
2026-01-13 00:41:55.943606 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-13 00:41:55.943613 | orchestrator | Tuesday 13 January 2026 00:41:50 +0000 (0:00:00.152) 0:00:38.354 *******
2026-01-13 00:41:55.943620 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})
2026-01-13 00:41:55.943626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})
2026-01-13 00:41:55.943632 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:55.943638 | orchestrator |
2026-01-13 00:41:55.943644 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-13 00:41:55.943651 | orchestrator | Tuesday 13 January 2026 00:41:50 +0000 (0:00:00.146) 0:00:38.499 *******
2026-01-13 00:41:55.943657 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:55.943663 | orchestrator |
2026-01-13 00:41:55.943669 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-13 00:41:55.943675 | orchestrator | Tuesday 13 January 2026 00:41:50 +0000 (0:00:00.146) 0:00:38.645 *******
2026-01-13 00:41:55.943682 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})
2026-01-13 00:41:55.943688 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})
2026-01-13 00:41:55.943694 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:55.943701 | orchestrator |
2026-01-13 00:41:55.943757 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-13 00:41:55.943766 | orchestrator | Tuesday 13 January 2026 00:41:51 +0000 (0:00:00.368) 0:00:39.014 *******
2026-01-13 00:41:55.943773 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:55.943779 | orchestrator |
2026-01-13 00:41:55.943785 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-13 00:41:55.943792 | orchestrator | Tuesday 13 January 2026 00:41:51 +0000 (0:00:00.140) 0:00:39.155 *******
2026-01-13 00:41:55.943798 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})
2026-01-13 00:41:55.943804 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})
2026-01-13 00:41:55.943810 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:55.943816 | orchestrator |
2026-01-13 00:41:55.943823 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-13 00:41:55.943829 | orchestrator | Tuesday 13 January 2026 00:41:51 +0000 (0:00:00.161) 0:00:39.317 *******
2026-01-13 00:41:55.943835 | orchestrator | ok: [testbed-node-4]
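The "Create dict of block VGs -> PVs from ceph_osd_devices" and "Create block VGs/LVs" tasks above derive one LVM volume group and logical volume per OSD device from the `osd_lvm_uuid` in `ceph_osd_devices`, producing the `{'data': ..., 'data_vg': ...}` loop items seen in the log. A minimal illustrative sketch of that name derivation (the helper function is hypothetical, not the playbook's actual code; the input dict mirrors the loop items logged for testbed-node-4):

```python
# Input shaped like the ceph_osd_devices items shown in the log.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "11aa5137-b5aa-5373-b4c1-0bd5a429c1a5"},
    "sdc": {"osd_lvm_uuid": "2b3e8737-91e3-53c0-9b3a-5288a4111b63"},
}


def lvm_names(devices):
    """Return one {'data': <LV name>, 'data_vg': <VG name>} entry per OSD
    device, matching the items printed by the 'Create block VGs/LVs' tasks."""
    return [
        {
            "data": "osd-block-" + spec["osd_lvm_uuid"],
            "data_vg": "ceph-" + spec["osd_lvm_uuid"],
        }
        for spec in devices.values()
    ]


for item in lvm_names(ceph_osd_devices):
    print(item)
```

The actual VG/LV creation is then a per-item LVM operation (the log shows `changed:` for each item), which this sketch deliberately leaves out.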
2026-01-13 00:41:55.943863 | orchestrator |
2026-01-13 00:41:55.943870 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-13 00:41:55.943876 | orchestrator | Tuesday 13 January 2026 00:41:51 +0000 (0:00:00.141) 0:00:39.458 *******
2026-01-13 00:41:55.943883 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})
2026-01-13 00:41:55.943889 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})
2026-01-13 00:41:55.943895 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:55.943901 | orchestrator |
2026-01-13 00:41:55.943907 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-13 00:41:55.943914 | orchestrator | Tuesday 13 January 2026 00:41:51 +0000 (0:00:00.166) 0:00:39.624 *******
2026-01-13 00:41:55.943920 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})
2026-01-13 00:41:55.943926 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})
2026-01-13 00:41:55.943932 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:55.943938 | orchestrator |
2026-01-13 00:41:55.943944 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-13 00:41:55.943964 | orchestrator | Tuesday 13 January 2026 00:41:52 +0000 (0:00:00.157) 0:00:39.782 *******
2026-01-13 00:41:55.943971 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})
2026-01-13 00:41:55.943977 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})
2026-01-13 00:41:55.943983 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:55.943990 | orchestrator |
2026-01-13 00:41:55.943996 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-13 00:41:55.944002 | orchestrator | Tuesday 13 January 2026 00:41:52 +0000 (0:00:00.155) 0:00:39.937 *******
2026-01-13 00:41:55.944008 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:55.944014 | orchestrator |
2026-01-13 00:41:55.944020 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-13 00:41:55.944026 | orchestrator | Tuesday 13 January 2026 00:41:52 +0000 (0:00:00.126) 0:00:40.064 *******
2026-01-13 00:41:55.944032 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:55.944038 | orchestrator |
2026-01-13 00:41:55.944044 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-13 00:41:55.944050 | orchestrator | Tuesday 13 January 2026 00:41:52 +0000 (0:00:00.142) 0:00:40.206 *******
2026-01-13 00:41:55.944056 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:41:55.944062 | orchestrator |
2026-01-13 00:41:55.944069 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-13 00:41:55.944075 | orchestrator | Tuesday 13 January 2026 00:41:52 +0000 (0:00:00.131) 0:00:40.338 *******
2026-01-13 00:41:55.944081 | orchestrator | ok: [testbed-node-4] => {
2026-01-13 00:41:55.944087 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-13 00:41:55.944093 | orchestrator | }
2026-01-13 00:41:55.944100 | orchestrator |
2026-01-13 00:41:55.944106 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-13 00:41:55.944112 | orchestrator | Tuesday 13 January 2026 00:41:52 +0000 (0:00:00.133) 0:00:40.471 *******
2026-01-13 00:41:55.944118 | orchestrator | ok: [testbed-node-4] => {
2026-01-13 00:41:55.944124 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-13 00:41:55.944131 | orchestrator | }
2026-01-13 00:41:55.944137 | orchestrator |
2026-01-13 00:41:55.944143 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-13 00:41:55.944149 | orchestrator | Tuesday 13 January 2026 00:41:52 +0000 (0:00:00.135) 0:00:40.607 *******
2026-01-13 00:41:55.944161 | orchestrator | ok: [testbed-node-4] => {
2026-01-13 00:41:55.944167 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-13 00:41:55.944174 | orchestrator | }
2026-01-13 00:41:55.944179 | orchestrator |
2026-01-13 00:41:55.944185 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-13 00:41:55.944190 | orchestrator | Tuesday 13 January 2026 00:41:53 +0000 (0:00:00.348) 0:00:40.955 *******
2026-01-13 00:41:55.944196 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:41:55.944202 | orchestrator |
2026-01-13 00:41:55.944209 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-13 00:41:55.944218 | orchestrator | Tuesday 13 January 2026 00:41:53 +0000 (0:00:00.545) 0:00:41.500 *******
2026-01-13 00:41:55.944225 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:41:55.944231 | orchestrator |
2026-01-13 00:41:55.944237 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-13 00:41:55.944243 | orchestrator | Tuesday 13 January 2026 00:41:54 +0000 (0:00:00.556) 0:00:42.057 *******
2026-01-13 00:41:55.944249 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:41:55.944255 | orchestrator |
2026-01-13 00:41:55.944261 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output]
************************* 2026-01-13 00:41:55.944268 | orchestrator | Tuesday 13 January 2026 00:41:54 +0000 (0:00:00.533) 0:00:42.590 ******* 2026-01-13 00:41:55.944274 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:41:55.944280 | orchestrator | 2026-01-13 00:41:55.944286 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-13 00:41:55.944292 | orchestrator | Tuesday 13 January 2026 00:41:55 +0000 (0:00:00.160) 0:00:42.751 ******* 2026-01-13 00:41:55.944298 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:41:55.944304 | orchestrator | 2026-01-13 00:41:55.944310 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-13 00:41:55.944316 | orchestrator | Tuesday 13 January 2026 00:41:55 +0000 (0:00:00.106) 0:00:42.858 ******* 2026-01-13 00:41:55.944322 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:41:55.944329 | orchestrator | 2026-01-13 00:41:55.944335 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-13 00:41:55.944341 | orchestrator | Tuesday 13 January 2026 00:41:55 +0000 (0:00:00.103) 0:00:42.961 ******* 2026-01-13 00:41:55.944347 | orchestrator | ok: [testbed-node-4] => { 2026-01-13 00:41:55.944353 | orchestrator |  "vgs_report": { 2026-01-13 00:41:55.944359 | orchestrator |  "vg": [] 2026-01-13 00:41:55.944365 | orchestrator |  } 2026-01-13 00:41:55.944372 | orchestrator | } 2026-01-13 00:41:55.944378 | orchestrator | 2026-01-13 00:41:55.944384 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-13 00:41:55.944390 | orchestrator | Tuesday 13 January 2026 00:41:55 +0000 (0:00:00.151) 0:00:43.112 ******* 2026-01-13 00:41:55.944396 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:41:55.944402 | orchestrator | 2026-01-13 00:41:55.944408 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-01-13 00:41:55.944415 | orchestrator | Tuesday 13 January 2026 00:41:55 +0000 (0:00:00.132) 0:00:43.245 ******* 2026-01-13 00:41:55.944421 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:41:55.944427 | orchestrator | 2026-01-13 00:41:55.944433 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-13 00:41:55.944439 | orchestrator | Tuesday 13 January 2026 00:41:55 +0000 (0:00:00.127) 0:00:43.373 ******* 2026-01-13 00:41:55.944445 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:41:55.944451 | orchestrator | 2026-01-13 00:41:55.944457 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-13 00:41:55.944464 | orchestrator | Tuesday 13 January 2026 00:41:55 +0000 (0:00:00.127) 0:00:43.501 ******* 2026-01-13 00:41:55.944470 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:41:55.944476 | orchestrator | 2026-01-13 00:41:55.944487 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-13 00:42:00.752240 | orchestrator | Tuesday 13 January 2026 00:41:55 +0000 (0:00:00.124) 0:00:43.625 ******* 2026-01-13 00:42:00.752348 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752360 | orchestrator | 2026-01-13 00:42:00.752368 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-13 00:42:00.752372 | orchestrator | Tuesday 13 January 2026 00:41:56 +0000 (0:00:00.335) 0:00:43.960 ******* 2026-01-13 00:42:00.752376 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752380 | orchestrator | 2026-01-13 00:42:00.752384 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-13 00:42:00.752388 | orchestrator | Tuesday 13 January 2026 00:41:56 +0000 (0:00:00.139) 0:00:44.100 ******* 2026-01-13 00:42:00.752392 | orchestrator | skipping: [testbed-node-4] 
2026-01-13 00:42:00.752395 | orchestrator | 2026-01-13 00:42:00.752399 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-13 00:42:00.752403 | orchestrator | Tuesday 13 January 2026 00:41:56 +0000 (0:00:00.128) 0:00:44.229 ******* 2026-01-13 00:42:00.752406 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752410 | orchestrator | 2026-01-13 00:42:00.752414 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-13 00:42:00.752418 | orchestrator | Tuesday 13 January 2026 00:41:56 +0000 (0:00:00.133) 0:00:44.362 ******* 2026-01-13 00:42:00.752421 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752425 | orchestrator | 2026-01-13 00:42:00.752429 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-13 00:42:00.752432 | orchestrator | Tuesday 13 January 2026 00:41:56 +0000 (0:00:00.133) 0:00:44.496 ******* 2026-01-13 00:42:00.752436 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752440 | orchestrator | 2026-01-13 00:42:00.752443 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-13 00:42:00.752447 | orchestrator | Tuesday 13 January 2026 00:41:56 +0000 (0:00:00.143) 0:00:44.639 ******* 2026-01-13 00:42:00.752451 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752454 | orchestrator | 2026-01-13 00:42:00.752458 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-13 00:42:00.752462 | orchestrator | Tuesday 13 January 2026 00:41:57 +0000 (0:00:00.147) 0:00:44.787 ******* 2026-01-13 00:42:00.752465 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752469 | orchestrator | 2026-01-13 00:42:00.752473 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-13 00:42:00.752477 | orchestrator | 
Tuesday 13 January 2026 00:41:57 +0000 (0:00:00.148) 0:00:44.935 ******* 2026-01-13 00:42:00.752480 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752484 | orchestrator | 2026-01-13 00:42:00.752488 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-13 00:42:00.752491 | orchestrator | Tuesday 13 January 2026 00:41:57 +0000 (0:00:00.134) 0:00:45.069 ******* 2026-01-13 00:42:00.752495 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752499 | orchestrator | 2026-01-13 00:42:00.752503 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-13 00:42:00.752507 | orchestrator | Tuesday 13 January 2026 00:41:57 +0000 (0:00:00.148) 0:00:45.218 ******* 2026-01-13 00:42:00.752512 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})  2026-01-13 00:42:00.752517 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})  2026-01-13 00:42:00.752521 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752525 | orchestrator | 2026-01-13 00:42:00.752528 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-13 00:42:00.752532 | orchestrator | Tuesday 13 January 2026 00:41:57 +0000 (0:00:00.164) 0:00:45.383 ******* 2026-01-13 00:42:00.752536 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})  2026-01-13 00:42:00.752544 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})  2026-01-13 00:42:00.752547 | orchestrator | skipping: 
[testbed-node-4] 2026-01-13 00:42:00.752551 | orchestrator | 2026-01-13 00:42:00.752555 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-13 00:42:00.752559 | orchestrator | Tuesday 13 January 2026 00:41:57 +0000 (0:00:00.145) 0:00:45.529 ******* 2026-01-13 00:42:00.752562 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})  2026-01-13 00:42:00.752566 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})  2026-01-13 00:42:00.752570 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752574 | orchestrator | 2026-01-13 00:42:00.752577 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-13 00:42:00.752581 | orchestrator | Tuesday 13 January 2026 00:41:58 +0000 (0:00:00.347) 0:00:45.876 ******* 2026-01-13 00:42:00.752585 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})  2026-01-13 00:42:00.752588 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})  2026-01-13 00:42:00.752592 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752596 | orchestrator | 2026-01-13 00:42:00.752613 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-13 00:42:00.752617 | orchestrator | Tuesday 13 January 2026 00:41:58 +0000 (0:00:00.158) 0:00:46.035 ******* 2026-01-13 00:42:00.752621 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 
'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})  2026-01-13 00:42:00.752625 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})  2026-01-13 00:42:00.752628 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752632 | orchestrator | 2026-01-13 00:42:00.752636 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-13 00:42:00.752640 | orchestrator | Tuesday 13 January 2026 00:41:58 +0000 (0:00:00.175) 0:00:46.211 ******* 2026-01-13 00:42:00.752644 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})  2026-01-13 00:42:00.752648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})  2026-01-13 00:42:00.752651 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752655 | orchestrator | 2026-01-13 00:42:00.752659 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-13 00:42:00.752663 | orchestrator | Tuesday 13 January 2026 00:41:58 +0000 (0:00:00.154) 0:00:46.366 ******* 2026-01-13 00:42:00.752718 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})  2026-01-13 00:42:00.752726 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})  2026-01-13 00:42:00.752732 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752739 | orchestrator | 2026-01-13 00:42:00.752745 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-13 
00:42:00.752751 | orchestrator | Tuesday 13 January 2026 00:41:58 +0000 (0:00:00.179) 0:00:46.545 ******* 2026-01-13 00:42:00.752763 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})  2026-01-13 00:42:00.752773 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})  2026-01-13 00:42:00.752778 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752782 | orchestrator | 2026-01-13 00:42:00.752789 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-13 00:42:00.752795 | orchestrator | Tuesday 13 January 2026 00:41:59 +0000 (0:00:00.169) 0:00:46.715 ******* 2026-01-13 00:42:00.752801 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:42:00.752807 | orchestrator | 2026-01-13 00:42:00.752813 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-13 00:42:00.752819 | orchestrator | Tuesday 13 January 2026 00:41:59 +0000 (0:00:00.501) 0:00:47.216 ******* 2026-01-13 00:42:00.752826 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:42:00.752831 | orchestrator | 2026-01-13 00:42:00.752838 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-13 00:42:00.752844 | orchestrator | Tuesday 13 January 2026 00:42:00 +0000 (0:00:00.515) 0:00:47.732 ******* 2026-01-13 00:42:00.752850 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:42:00.752856 | orchestrator | 2026-01-13 00:42:00.752863 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-13 00:42:00.752869 | orchestrator | Tuesday 13 January 2026 00:42:00 +0000 (0:00:00.158) 0:00:47.890 ******* 2026-01-13 00:42:00.752876 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'vg_name': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'}) 2026-01-13 00:42:00.752882 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'vg_name': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'}) 2026-01-13 00:42:00.752886 | orchestrator | 2026-01-13 00:42:00.752891 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-13 00:42:00.752895 | orchestrator | Tuesday 13 January 2026 00:42:00 +0000 (0:00:00.195) 0:00:48.086 ******* 2026-01-13 00:42:00.752899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})  2026-01-13 00:42:00.752904 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})  2026-01-13 00:42:00.752908 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:00.752912 | orchestrator | 2026-01-13 00:42:00.752917 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-13 00:42:00.752921 | orchestrator | Tuesday 13 January 2026 00:42:00 +0000 (0:00:00.173) 0:00:48.259 ******* 2026-01-13 00:42:00.752925 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})  2026-01-13 00:42:00.752935 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})  2026-01-13 00:42:06.901561 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:06.901675 | orchestrator | 2026-01-13 00:42:06.901692 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-13 00:42:06.901759 | 
orchestrator | Tuesday 13 January 2026 00:42:00 +0000 (0:00:00.176) 0:00:48.435 ******* 2026-01-13 00:42:06.901772 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})  2026-01-13 00:42:06.901786 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})  2026-01-13 00:42:06.901798 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:06.901833 | orchestrator | 2026-01-13 00:42:06.901846 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-13 00:42:06.901858 | orchestrator | Tuesday 13 January 2026 00:42:00 +0000 (0:00:00.176) 0:00:48.612 ******* 2026-01-13 00:42:06.901869 | orchestrator | ok: [testbed-node-4] => { 2026-01-13 00:42:06.901881 | orchestrator |  "lvm_report": { 2026-01-13 00:42:06.901894 | orchestrator |  "lv": [ 2026-01-13 00:42:06.901905 | orchestrator |  { 2026-01-13 00:42:06.901917 | orchestrator |  "lv_name": "osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5", 2026-01-13 00:42:06.901929 | orchestrator |  "vg_name": "ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5" 2026-01-13 00:42:06.901941 | orchestrator |  }, 2026-01-13 00:42:06.901953 | orchestrator |  { 2026-01-13 00:42:06.901964 | orchestrator |  "lv_name": "osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63", 2026-01-13 00:42:06.901976 | orchestrator |  "vg_name": "ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63" 2026-01-13 00:42:06.901988 | orchestrator |  } 2026-01-13 00:42:06.901999 | orchestrator |  ], 2026-01-13 00:42:06.902011 | orchestrator |  "pv": [ 2026-01-13 00:42:06.902078 | orchestrator |  { 2026-01-13 00:42:06.902091 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-13 00:42:06.902104 | orchestrator |  "vg_name": "ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5" 2026-01-13 00:42:06.902118 | orchestrator |  }, 2026-01-13 
00:42:06.902131 | orchestrator |  { 2026-01-13 00:42:06.902172 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-13 00:42:06.902187 | orchestrator |  "vg_name": "ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63" 2026-01-13 00:42:06.902200 | orchestrator |  } 2026-01-13 00:42:06.902214 | orchestrator |  ] 2026-01-13 00:42:06.902227 | orchestrator |  } 2026-01-13 00:42:06.902241 | orchestrator | } 2026-01-13 00:42:06.902255 | orchestrator | 2026-01-13 00:42:06.902268 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-13 00:42:06.902282 | orchestrator | 2026-01-13 00:42:06.902295 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-13 00:42:06.902322 | orchestrator | Tuesday 13 January 2026 00:42:01 +0000 (0:00:00.545) 0:00:49.157 ******* 2026-01-13 00:42:06.902336 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-13 00:42:06.902350 | orchestrator | 2026-01-13 00:42:06.902364 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-13 00:42:06.902378 | orchestrator | Tuesday 13 January 2026 00:42:01 +0000 (0:00:00.259) 0:00:49.416 ******* 2026-01-13 00:42:06.902391 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:42:06.902405 | orchestrator | 2026-01-13 00:42:06.902418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.902432 | orchestrator | Tuesday 13 January 2026 00:42:01 +0000 (0:00:00.250) 0:00:49.666 ******* 2026-01-13 00:42:06.902447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-13 00:42:06.902460 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-13 00:42:06.902473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-13 00:42:06.902485 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-13 00:42:06.902498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-13 00:42:06.902510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-13 00:42:06.902522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-13 00:42:06.902535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-13 00:42:06.902547 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-13 00:42:06.902569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-13 00:42:06.902581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-13 00:42:06.902592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-13 00:42:06.902605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-13 00:42:06.902617 | orchestrator | 2026-01-13 00:42:06.902633 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.902646 | orchestrator | Tuesday 13 January 2026 00:42:02 +0000 (0:00:00.429) 0:00:50.096 ******* 2026-01-13 00:42:06.902658 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:06.902671 | orchestrator | 2026-01-13 00:42:06.902683 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.902730 | orchestrator | Tuesday 13 January 2026 00:42:02 +0000 (0:00:00.236) 0:00:50.332 ******* 2026-01-13 00:42:06.902743 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:06.902755 | orchestrator | 2026-01-13 
00:42:06.902767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.902795 | orchestrator | Tuesday 13 January 2026 00:42:02 +0000 (0:00:00.176) 0:00:50.509 ******* 2026-01-13 00:42:06.902807 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:06.902818 | orchestrator | 2026-01-13 00:42:06.902829 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.902839 | orchestrator | Tuesday 13 January 2026 00:42:03 +0000 (0:00:00.192) 0:00:50.701 ******* 2026-01-13 00:42:06.902851 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:06.902862 | orchestrator | 2026-01-13 00:42:06.902874 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.902886 | orchestrator | Tuesday 13 January 2026 00:42:03 +0000 (0:00:00.209) 0:00:50.911 ******* 2026-01-13 00:42:06.902897 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:06.902909 | orchestrator | 2026-01-13 00:42:06.902920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.902930 | orchestrator | Tuesday 13 January 2026 00:42:03 +0000 (0:00:00.592) 0:00:51.503 ******* 2026-01-13 00:42:06.902941 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:06.902952 | orchestrator | 2026-01-13 00:42:06.902964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.902976 | orchestrator | Tuesday 13 January 2026 00:42:04 +0000 (0:00:00.200) 0:00:51.704 ******* 2026-01-13 00:42:06.902987 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:06.902999 | orchestrator | 2026-01-13 00:42:06.903010 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.903022 | orchestrator | Tuesday 13 January 2026 00:42:04 +0000 (0:00:00.222) 
0:00:51.927 ******* 2026-01-13 00:42:06.903034 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:06.903045 | orchestrator | 2026-01-13 00:42:06.903057 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.903068 | orchestrator | Tuesday 13 January 2026 00:42:04 +0000 (0:00:00.193) 0:00:52.120 ******* 2026-01-13 00:42:06.903080 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569) 2026-01-13 00:42:06.903093 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569) 2026-01-13 00:42:06.903104 | orchestrator | 2026-01-13 00:42:06.903116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.903127 | orchestrator | Tuesday 13 January 2026 00:42:04 +0000 (0:00:00.393) 0:00:52.514 ******* 2026-01-13 00:42:06.903139 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_79922d84-0445-4535-976b-32e74e35a748) 2026-01-13 00:42:06.903150 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_79922d84-0445-4535-976b-32e74e35a748) 2026-01-13 00:42:06.903161 | orchestrator | 2026-01-13 00:42:06.903180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.903198 | orchestrator | Tuesday 13 January 2026 00:42:05 +0000 (0:00:00.406) 0:00:52.920 ******* 2026-01-13 00:42:06.903209 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f69e02e7-d854-4ded-bb8d-51d0e0400336) 2026-01-13 00:42:06.903221 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f69e02e7-d854-4ded-bb8d-51d0e0400336) 2026-01-13 00:42:06.903233 | orchestrator | 2026-01-13 00:42:06.903244 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.903256 | orchestrator | Tuesday 13 
January 2026 00:42:05 +0000 (0:00:00.446) 0:00:53.366 ******* 2026-01-13 00:42:06.903268 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5295d09e-fddd-4452-8a25-9ba23e2b95ae) 2026-01-13 00:42:06.903279 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5295d09e-fddd-4452-8a25-9ba23e2b95ae) 2026-01-13 00:42:06.903291 | orchestrator | 2026-01-13 00:42:06.903302 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-13 00:42:06.903314 | orchestrator | Tuesday 13 January 2026 00:42:06 +0000 (0:00:00.445) 0:00:53.811 ******* 2026-01-13 00:42:06.903326 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-13 00:42:06.903338 | orchestrator | 2026-01-13 00:42:06.903349 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:06.903361 | orchestrator | Tuesday 13 January 2026 00:42:06 +0000 (0:00:00.354) 0:00:54.166 ******* 2026-01-13 00:42:06.903372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-13 00:42:06.903384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-13 00:42:06.903395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-13 00:42:06.903407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-13 00:42:06.903418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-13 00:42:06.903430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-13 00:42:06.903441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-13 00:42:06.903453 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-13 00:42:06.903464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-13 00:42:06.903476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-13 00:42:06.903488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-13 00:42:06.903505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-13 00:42:15.874297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-13 00:42:15.874391 | orchestrator | 2026-01-13 00:42:15.874398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:15.874404 | orchestrator | Tuesday 13 January 2026 00:42:06 +0000 (0:00:00.403) 0:00:54.569 ******* 2026-01-13 00:42:15.874408 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.874413 | orchestrator | 2026-01-13 00:42:15.874417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:15.874421 | orchestrator | Tuesday 13 January 2026 00:42:07 +0000 (0:00:00.196) 0:00:54.765 ******* 2026-01-13 00:42:15.874425 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.874429 | orchestrator | 2026-01-13 00:42:15.874432 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:15.874436 | orchestrator | Tuesday 13 January 2026 00:42:07 +0000 (0:00:00.776) 0:00:55.542 ******* 2026-01-13 00:42:15.874455 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.874458 | orchestrator | 2026-01-13 00:42:15.874462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:15.874466 | 
orchestrator | Tuesday 13 January 2026 00:42:08 +0000 (0:00:00.195) 0:00:55.738 ******* 2026-01-13 00:42:15.874469 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.874473 | orchestrator | 2026-01-13 00:42:15.874477 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:15.874480 | orchestrator | Tuesday 13 January 2026 00:42:08 +0000 (0:00:00.195) 0:00:55.933 ******* 2026-01-13 00:42:15.874484 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.874488 | orchestrator | 2026-01-13 00:42:15.874491 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:15.874495 | orchestrator | Tuesday 13 January 2026 00:42:08 +0000 (0:00:00.183) 0:00:56.117 ******* 2026-01-13 00:42:15.874499 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.874502 | orchestrator | 2026-01-13 00:42:15.874506 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:15.874509 | orchestrator | Tuesday 13 January 2026 00:42:08 +0000 (0:00:00.191) 0:00:56.308 ******* 2026-01-13 00:42:15.874513 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.874517 | orchestrator | 2026-01-13 00:42:15.874520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:15.874524 | orchestrator | Tuesday 13 January 2026 00:42:08 +0000 (0:00:00.204) 0:00:56.512 ******* 2026-01-13 00:42:15.874528 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.874531 | orchestrator | 2026-01-13 00:42:15.874535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:15.874539 | orchestrator | Tuesday 13 January 2026 00:42:09 +0000 (0:00:00.208) 0:00:56.721 ******* 2026-01-13 00:42:15.874543 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-13 00:42:15.874547 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-01-13 00:42:15.874551 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-13 00:42:15.874555 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-13 00:42:15.874558 | orchestrator | 2026-01-13 00:42:15.874562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:15.874567 | orchestrator | Tuesday 13 January 2026 00:42:09 +0000 (0:00:00.732) 0:00:57.453 ******* 2026-01-13 00:42:15.874574 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.874580 | orchestrator | 2026-01-13 00:42:15.874587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:15.874594 | orchestrator | Tuesday 13 January 2026 00:42:09 +0000 (0:00:00.215) 0:00:57.669 ******* 2026-01-13 00:42:15.874601 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.874608 | orchestrator | 2026-01-13 00:42:15.874615 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:15.874622 | orchestrator | Tuesday 13 January 2026 00:42:10 +0000 (0:00:00.204) 0:00:57.874 ******* 2026-01-13 00:42:15.874628 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.874635 | orchestrator | 2026-01-13 00:42:15.874641 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-13 00:42:15.874650 | orchestrator | Tuesday 13 January 2026 00:42:10 +0000 (0:00:00.186) 0:00:58.060 ******* 2026-01-13 00:42:15.874658 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.874668 | orchestrator | 2026-01-13 00:42:15.874677 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-13 00:42:15.874686 | orchestrator | Tuesday 13 January 2026 00:42:10 +0000 (0:00:00.220) 0:00:58.280 ******* 2026-01-13 00:42:15.874736 | orchestrator | skipping: [testbed-node-5] 2026-01-13 
00:42:15.874743 | orchestrator | 2026-01-13 00:42:15.874750 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-13 00:42:15.874757 | orchestrator | Tuesday 13 January 2026 00:42:10 +0000 (0:00:00.291) 0:00:58.572 ******* 2026-01-13 00:42:15.874763 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e91d200a-cf56-55df-b2f8-08f15361112f'}}) 2026-01-13 00:42:15.874777 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7ebda4f6-7b50-59b0-8273-b291dd7d1677'}}) 2026-01-13 00:42:15.874783 | orchestrator | 2026-01-13 00:42:15.874790 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-13 00:42:15.874796 | orchestrator | Tuesday 13 January 2026 00:42:11 +0000 (0:00:00.180) 0:00:58.753 ******* 2026-01-13 00:42:15.874804 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'}) 2026-01-13 00:42:15.874828 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'}) 2026-01-13 00:42:15.874835 | orchestrator | 2026-01-13 00:42:15.874842 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-13 00:42:15.874862 | orchestrator | Tuesday 13 January 2026 00:42:12 +0000 (0:00:01.810) 0:01:00.564 ******* 2026-01-13 00:42:15.874870 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:15.874879 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:15.874886 | orchestrator | skipping: 
[testbed-node-5] 2026-01-13 00:42:15.874893 | orchestrator | 2026-01-13 00:42:15.874900 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-13 00:42:15.874908 | orchestrator | Tuesday 13 January 2026 00:42:13 +0000 (0:00:00.149) 0:01:00.713 ******* 2026-01-13 00:42:15.874915 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'}) 2026-01-13 00:42:15.874923 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'}) 2026-01-13 00:42:15.874930 | orchestrator | 2026-01-13 00:42:15.874937 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-13 00:42:15.874944 | orchestrator | Tuesday 13 January 2026 00:42:14 +0000 (0:00:01.301) 0:01:02.014 ******* 2026-01-13 00:42:15.874952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:15.874960 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:15.874967 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.874974 | orchestrator | 2026-01-13 00:42:15.874981 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-13 00:42:15.874988 | orchestrator | Tuesday 13 January 2026 00:42:14 +0000 (0:00:00.156) 0:01:02.171 ******* 2026-01-13 00:42:15.874996 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.875002 | orchestrator | 2026-01-13 00:42:15.875010 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-13 00:42:15.875017 | 
orchestrator | Tuesday 13 January 2026 00:42:14 +0000 (0:00:00.125) 0:01:02.296 ******* 2026-01-13 00:42:15.875028 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:15.875035 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:15.875039 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.875044 | orchestrator | 2026-01-13 00:42:15.875048 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-13 00:42:15.875056 | orchestrator | Tuesday 13 January 2026 00:42:14 +0000 (0:00:00.159) 0:01:02.456 ******* 2026-01-13 00:42:15.875060 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.875065 | orchestrator | 2026-01-13 00:42:15.875069 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-13 00:42:15.875073 | orchestrator | Tuesday 13 January 2026 00:42:14 +0000 (0:00:00.130) 0:01:02.586 ******* 2026-01-13 00:42:15.875078 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:15.875082 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:15.875086 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.875091 | orchestrator | 2026-01-13 00:42:15.875095 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-13 00:42:15.875099 | orchestrator | Tuesday 13 January 2026 00:42:15 +0000 (0:00:00.151) 0:01:02.738 ******* 2026-01-13 00:42:15.875103 | orchestrator | 
skipping: [testbed-node-5] 2026-01-13 00:42:15.875107 | orchestrator | 2026-01-13 00:42:15.875111 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-13 00:42:15.875116 | orchestrator | Tuesday 13 January 2026 00:42:15 +0000 (0:00:00.142) 0:01:02.880 ******* 2026-01-13 00:42:15.875120 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:15.875124 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:15.875128 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:15.875133 | orchestrator | 2026-01-13 00:42:15.875137 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-13 00:42:15.875141 | orchestrator | Tuesday 13 January 2026 00:42:15 +0000 (0:00:00.147) 0:01:03.028 ******* 2026-01-13 00:42:15.875145 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:42:15.875150 | orchestrator | 2026-01-13 00:42:15.875154 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-13 00:42:15.875158 | orchestrator | Tuesday 13 January 2026 00:42:15 +0000 (0:00:00.363) 0:01:03.391 ******* 2026-01-13 00:42:15.875166 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:21.795956 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:21.796064 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.796080 | orchestrator | 2026-01-13 00:42:21.796093 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-13 00:42:21.796105 | orchestrator | Tuesday 13 January 2026 00:42:15 +0000 (0:00:00.168) 0:01:03.560 ******* 2026-01-13 00:42:21.796118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:21.796130 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:21.796140 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.796151 | orchestrator | 2026-01-13 00:42:21.796162 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-13 00:42:21.796173 | orchestrator | Tuesday 13 January 2026 00:42:16 +0000 (0:00:00.166) 0:01:03.726 ******* 2026-01-13 00:42:21.796184 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:21.796195 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:21.796229 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.796240 | orchestrator | 2026-01-13 00:42:21.796251 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-13 00:42:21.796262 | orchestrator | Tuesday 13 January 2026 00:42:16 +0000 (0:00:00.163) 0:01:03.890 ******* 2026-01-13 00:42:21.796272 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.796282 | orchestrator | 2026-01-13 00:42:21.796293 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-13 00:42:21.796304 | orchestrator | Tuesday 13 January 2026 00:42:16 +0000 
(0:00:00.141) 0:01:04.032 ******* 2026-01-13 00:42:21.796314 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.796325 | orchestrator | 2026-01-13 00:42:21.796335 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-13 00:42:21.796346 | orchestrator | Tuesday 13 January 2026 00:42:16 +0000 (0:00:00.132) 0:01:04.164 ******* 2026-01-13 00:42:21.796358 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.796374 | orchestrator | 2026-01-13 00:42:21.796410 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-13 00:42:21.796429 | orchestrator | Tuesday 13 January 2026 00:42:16 +0000 (0:00:00.157) 0:01:04.321 ******* 2026-01-13 00:42:21.796447 | orchestrator | ok: [testbed-node-5] => { 2026-01-13 00:42:21.796464 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-13 00:42:21.796476 | orchestrator | } 2026-01-13 00:42:21.796486 | orchestrator | 2026-01-13 00:42:21.796497 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-13 00:42:21.796507 | orchestrator | Tuesday 13 January 2026 00:42:16 +0000 (0:00:00.133) 0:01:04.454 ******* 2026-01-13 00:42:21.796518 | orchestrator | ok: [testbed-node-5] => { 2026-01-13 00:42:21.796528 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-13 00:42:21.796540 | orchestrator | } 2026-01-13 00:42:21.796550 | orchestrator | 2026-01-13 00:42:21.796561 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-13 00:42:21.796572 | orchestrator | Tuesday 13 January 2026 00:42:16 +0000 (0:00:00.155) 0:01:04.610 ******* 2026-01-13 00:42:21.796582 | orchestrator | ok: [testbed-node-5] => { 2026-01-13 00:42:21.796593 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-13 00:42:21.796603 | orchestrator | } 2026-01-13 00:42:21.796614 | orchestrator | 2026-01-13 00:42:21.796625 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-01-13 00:42:21.796635 | orchestrator | Tuesday 13 January 2026 00:42:17 +0000 (0:00:00.140) 0:01:04.750 ******* 2026-01-13 00:42:21.796646 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:42:21.796656 | orchestrator | 2026-01-13 00:42:21.796667 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-13 00:42:21.796677 | orchestrator | Tuesday 13 January 2026 00:42:17 +0000 (0:00:00.512) 0:01:05.262 ******* 2026-01-13 00:42:21.796743 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:42:21.796763 | orchestrator | 2026-01-13 00:42:21.796782 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-13 00:42:21.796798 | orchestrator | Tuesday 13 January 2026 00:42:18 +0000 (0:00:00.531) 0:01:05.794 ******* 2026-01-13 00:42:21.796809 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:42:21.796819 | orchestrator | 2026-01-13 00:42:21.796834 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-13 00:42:21.796852 | orchestrator | Tuesday 13 January 2026 00:42:18 +0000 (0:00:00.735) 0:01:06.530 ******* 2026-01-13 00:42:21.796870 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:42:21.796887 | orchestrator | 2026-01-13 00:42:21.796903 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-13 00:42:21.796921 | orchestrator | Tuesday 13 January 2026 00:42:18 +0000 (0:00:00.154) 0:01:06.684 ******* 2026-01-13 00:42:21.796938 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.796956 | orchestrator | 2026-01-13 00:42:21.796973 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-13 00:42:21.797010 | orchestrator | Tuesday 13 January 2026 00:42:19 +0000 (0:00:00.118) 0:01:06.802 ******* 2026-01-13 00:42:21.797030 | orchestrator | 
skipping: [testbed-node-5] 2026-01-13 00:42:21.797049 | orchestrator | 2026-01-13 00:42:21.797060 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-13 00:42:21.797071 | orchestrator | Tuesday 13 January 2026 00:42:19 +0000 (0:00:00.094) 0:01:06.897 ******* 2026-01-13 00:42:21.797082 | orchestrator | ok: [testbed-node-5] => { 2026-01-13 00:42:21.797093 | orchestrator |  "vgs_report": { 2026-01-13 00:42:21.797103 | orchestrator |  "vg": [] 2026-01-13 00:42:21.797134 | orchestrator |  } 2026-01-13 00:42:21.797146 | orchestrator | } 2026-01-13 00:42:21.797157 | orchestrator | 2026-01-13 00:42:21.797167 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-13 00:42:21.797178 | orchestrator | Tuesday 13 January 2026 00:42:19 +0000 (0:00:00.123) 0:01:07.020 ******* 2026-01-13 00:42:21.797189 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797199 | orchestrator | 2026-01-13 00:42:21.797210 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-13 00:42:21.797221 | orchestrator | Tuesday 13 January 2026 00:42:19 +0000 (0:00:00.116) 0:01:07.136 ******* 2026-01-13 00:42:21.797231 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797242 | orchestrator | 2026-01-13 00:42:21.797253 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-13 00:42:21.797263 | orchestrator | Tuesday 13 January 2026 00:42:19 +0000 (0:00:00.129) 0:01:07.266 ******* 2026-01-13 00:42:21.797274 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797284 | orchestrator | 2026-01-13 00:42:21.797295 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-13 00:42:21.797306 | orchestrator | Tuesday 13 January 2026 00:42:19 +0000 (0:00:00.155) 0:01:07.422 ******* 2026-01-13 00:42:21.797316 | orchestrator | 
skipping: [testbed-node-5] 2026-01-13 00:42:21.797327 | orchestrator | 2026-01-13 00:42:21.797338 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-13 00:42:21.797348 | orchestrator | Tuesday 13 January 2026 00:42:19 +0000 (0:00:00.131) 0:01:07.553 ******* 2026-01-13 00:42:21.797359 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797369 | orchestrator | 2026-01-13 00:42:21.797380 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-13 00:42:21.797391 | orchestrator | Tuesday 13 January 2026 00:42:19 +0000 (0:00:00.131) 0:01:07.684 ******* 2026-01-13 00:42:21.797401 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797412 | orchestrator | 2026-01-13 00:42:21.797422 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-13 00:42:21.797433 | orchestrator | Tuesday 13 January 2026 00:42:20 +0000 (0:00:00.124) 0:01:07.809 ******* 2026-01-13 00:42:21.797443 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797454 | orchestrator | 2026-01-13 00:42:21.797465 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-13 00:42:21.797475 | orchestrator | Tuesday 13 January 2026 00:42:20 +0000 (0:00:00.126) 0:01:07.936 ******* 2026-01-13 00:42:21.797486 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797496 | orchestrator | 2026-01-13 00:42:21.797507 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-13 00:42:21.797517 | orchestrator | Tuesday 13 January 2026 00:42:20 +0000 (0:00:00.338) 0:01:08.275 ******* 2026-01-13 00:42:21.797528 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797538 | orchestrator | 2026-01-13 00:42:21.797557 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-13 
00:42:21.797568 | orchestrator | Tuesday 13 January 2026 00:42:20 +0000 (0:00:00.132) 0:01:08.407 ******* 2026-01-13 00:42:21.797578 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797589 | orchestrator | 2026-01-13 00:42:21.797600 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-13 00:42:21.797632 | orchestrator | Tuesday 13 January 2026 00:42:20 +0000 (0:00:00.132) 0:01:08.540 ******* 2026-01-13 00:42:21.797643 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797654 | orchestrator | 2026-01-13 00:42:21.797665 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-13 00:42:21.797676 | orchestrator | Tuesday 13 January 2026 00:42:20 +0000 (0:00:00.119) 0:01:08.660 ******* 2026-01-13 00:42:21.797724 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797738 | orchestrator | 2026-01-13 00:42:21.797749 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-13 00:42:21.797759 | orchestrator | Tuesday 13 January 2026 00:42:21 +0000 (0:00:00.136) 0:01:08.797 ******* 2026-01-13 00:42:21.797770 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797780 | orchestrator | 2026-01-13 00:42:21.797791 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-13 00:42:21.797802 | orchestrator | Tuesday 13 January 2026 00:42:21 +0000 (0:00:00.122) 0:01:08.919 ******* 2026-01-13 00:42:21.797812 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797823 | orchestrator | 2026-01-13 00:42:21.797833 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-13 00:42:21.797844 | orchestrator | Tuesday 13 January 2026 00:42:21 +0000 (0:00:00.119) 0:01:09.039 ******* 2026-01-13 00:42:21.797854 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:21.797866 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:21.797877 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797887 | orchestrator | 2026-01-13 00:42:21.797898 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-13 00:42:21.797908 | orchestrator | Tuesday 13 January 2026 00:42:21 +0000 (0:00:00.146) 0:01:09.186 ******* 2026-01-13 00:42:21.797919 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:21.797929 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:21.797940 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:21.797951 | orchestrator | 2026-01-13 00:42:21.797961 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-13 00:42:21.797972 | orchestrator | Tuesday 13 January 2026 00:42:21 +0000 (0:00:00.146) 0:01:09.332 ******* 2026-01-13 00:42:21.797990 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:24.732958 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:24.733067 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:24.733083 | orchestrator | 2026-01-13 00:42:24.733096 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-13 00:42:24.733108 | orchestrator | Tuesday 13 January 2026 00:42:21 +0000 (0:00:00.148) 0:01:09.480 ******* 2026-01-13 00:42:24.733119 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:24.733131 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:24.733141 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:24.733152 | orchestrator | 2026-01-13 00:42:24.733163 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-13 00:42:24.733211 | orchestrator | Tuesday 13 January 2026 00:42:21 +0000 (0:00:00.143) 0:01:09.623 ******* 2026-01-13 00:42:24.733233 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:24.733253 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:24.733272 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:24.733292 | orchestrator | 2026-01-13 00:42:24.733313 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-13 00:42:24.733334 | orchestrator | Tuesday 13 January 2026 00:42:22 +0000 (0:00:00.147) 0:01:09.771 ******* 2026-01-13 00:42:24.733355 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:24.733377 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:24.733399 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:24.733418 | orchestrator | 2026-01-13 00:42:24.733439 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-13 00:42:24.733461 | orchestrator | Tuesday 13 January 2026 00:42:22 +0000 (0:00:00.358) 0:01:10.129 ******* 2026-01-13 00:42:24.733478 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:24.733496 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:24.733516 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:24.733538 | orchestrator | 2026-01-13 00:42:24.733560 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-13 00:42:24.733580 | orchestrator | Tuesday 13 January 2026 00:42:22 +0000 (0:00:00.164) 0:01:10.294 ******* 2026-01-13 00:42:24.733602 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:24.733625 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:24.733647 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:24.733663 | orchestrator | 2026-01-13 00:42:24.733677 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-13 00:42:24.733744 | orchestrator | Tuesday 13 January 2026 00:42:22 +0000 (0:00:00.145) 0:01:10.439 ******* 2026-01-13 00:42:24.733756 | 
orchestrator | ok: [testbed-node-5] 2026-01-13 00:42:24.733768 | orchestrator | 2026-01-13 00:42:24.733779 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-13 00:42:24.733790 | orchestrator | Tuesday 13 January 2026 00:42:23 +0000 (0:00:00.525) 0:01:10.965 ******* 2026-01-13 00:42:24.733800 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:42:24.733811 | orchestrator | 2026-01-13 00:42:24.733822 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-13 00:42:24.733833 | orchestrator | Tuesday 13 January 2026 00:42:23 +0000 (0:00:00.521) 0:01:11.487 ******* 2026-01-13 00:42:24.733843 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:42:24.733854 | orchestrator | 2026-01-13 00:42:24.733864 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-13 00:42:24.733875 | orchestrator | Tuesday 13 January 2026 00:42:23 +0000 (0:00:00.133) 0:01:11.621 ******* 2026-01-13 00:42:24.733886 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'vg_name': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'}) 2026-01-13 00:42:24.733898 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'vg_name': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'}) 2026-01-13 00:42:24.733919 | orchestrator | 2026-01-13 00:42:24.733930 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-13 00:42:24.733941 | orchestrator | Tuesday 13 January 2026 00:42:24 +0000 (0:00:00.165) 0:01:11.787 ******* 2026-01-13 00:42:24.733992 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:24.734004 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:24.734108 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:24.734123 | orchestrator | 2026-01-13 00:42:24.734134 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-13 00:42:24.734146 | orchestrator | Tuesday 13 January 2026 00:42:24 +0000 (0:00:00.148) 0:01:11.936 ******* 2026-01-13 00:42:24.734157 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:24.734167 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:24.734178 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:24.734189 | orchestrator | 2026-01-13 00:42:24.734206 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-13 00:42:24.734226 | orchestrator | Tuesday 13 January 2026 00:42:24 +0000 (0:00:00.151) 0:01:12.087 ******* 2026-01-13 00:42:24.734246 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})  2026-01-13 00:42:24.734257 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})  2026-01-13 00:42:24.734268 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:24.734279 | orchestrator | 2026-01-13 00:42:24.734292 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-13 00:42:24.734311 | orchestrator | Tuesday 13 January 2026 00:42:24 +0000 (0:00:00.165) 0:01:12.252 ******* 2026-01-13 00:42:24.734329 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-13 00:42:24.734340 | orchestrator |  "lvm_report": { 2026-01-13 00:42:24.734351 | orchestrator |  "lv": [ 2026-01-13 00:42:24.734363 | orchestrator |  { 2026-01-13 00:42:24.734390 | orchestrator |  "lv_name": "osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677", 2026-01-13 00:42:24.734410 | orchestrator |  "vg_name": "ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677" 2026-01-13 00:42:24.734429 | orchestrator |  }, 2026-01-13 00:42:24.734450 | orchestrator |  { 2026-01-13 00:42:24.734470 | orchestrator |  "lv_name": "osd-block-e91d200a-cf56-55df-b2f8-08f15361112f", 2026-01-13 00:42:24.734490 | orchestrator |  "vg_name": "ceph-e91d200a-cf56-55df-b2f8-08f15361112f" 2026-01-13 00:42:24.734508 | orchestrator |  } 2026-01-13 00:42:24.734521 | orchestrator |  ], 2026-01-13 00:42:24.734540 | orchestrator |  "pv": [ 2026-01-13 00:42:24.734560 | orchestrator |  { 2026-01-13 00:42:24.734580 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-13 00:42:24.734597 | orchestrator |  "vg_name": "ceph-e91d200a-cf56-55df-b2f8-08f15361112f" 2026-01-13 00:42:24.734617 | orchestrator |  }, 2026-01-13 00:42:24.734636 | orchestrator |  { 2026-01-13 00:42:24.734656 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-13 00:42:24.734675 | orchestrator |  "vg_name": "ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677" 2026-01-13 00:42:24.734722 | orchestrator |  } 2026-01-13 00:42:24.734742 | orchestrator |  ] 2026-01-13 00:42:24.734786 | orchestrator |  } 2026-01-13 00:42:24.734806 | orchestrator | } 2026-01-13 00:42:24.734826 | orchestrator | 2026-01-13 00:42:24.734845 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:42:24.734864 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-13 00:42:24.734883 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-13 00:42:24.734898 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-13 00:42:24.734909 | orchestrator | 2026-01-13 00:42:24.734920 | orchestrator | 2026-01-13 00:42:24.734930 | orchestrator | 2026-01-13 00:42:24.734941 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:42:24.734952 | orchestrator | Tuesday 13 January 2026 00:42:24 +0000 (0:00:00.140) 0:01:12.393 ******* 2026-01-13 00:42:24.734962 | orchestrator | =============================================================================== 2026-01-13 00:42:24.734973 | orchestrator | Create block VGs -------------------------------------------------------- 5.63s 2026-01-13 00:42:24.734983 | orchestrator | Create block LVs -------------------------------------------------------- 4.13s 2026-01-13 00:42:24.734994 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.79s 2026-01-13 00:42:24.735004 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.66s 2026-01-13 00:42:24.735015 | orchestrator | Add known partitions to the list of available block devices ------------- 1.61s 2026-01-13 00:42:24.735026 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.60s 2026-01-13 00:42:24.735037 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s 2026-01-13 00:42:24.735047 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.53s 2026-01-13 00:42:24.735070 | orchestrator | Add known links to the list of available block devices ------------------ 1.34s 2026-01-13 00:42:25.081017 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s 2026-01-13 00:42:25.081115 | orchestrator | Print LVM report data --------------------------------------------------- 1.00s 2026-01-13 00:42:25.081130 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.87s 2026-01-13 00:42:25.081141 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s 2026-01-13 00:42:25.081151 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.80s 2026-01-13 00:42:25.081162 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2026-01-13 00:42:25.081173 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.75s 2026-01-13 00:42:25.081183 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-01-13 00:42:25.081194 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.70s 2026-01-13 00:42:25.081204 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-01-13 00:42:25.081215 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.69s 2026-01-13 00:42:37.321404 | orchestrator | 2026-01-13 00:42:37 | INFO  | Task 2f227694-2bc5-4186-89a0-6a2c85f9a271 (facts) was prepared for execution. 2026-01-13 00:42:37.321481 | orchestrator | 2026-01-13 00:42:37 | INFO  | It takes a moment until task 2f227694-2bc5-4186-89a0-6a2c85f9a271 (facts) has been started and output is visible here. 
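The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" and "Print LVM report data" tasks in the play above merge the JSON reports emitted by `lvs` and `pvs` into the single `lvm_report` structure printed for testbed-node-5. A minimal sketch of that combination step, assuming the command output is in the shape produced by `lvs --reportformat json` / `pvs --reportformat json` (the exact flags the playbook uses are not visible in this log; the sample data below is copied from the report printed above):

```python
import json

# Report JSON in the shape emitted by `lvs --reportformat json`,
# with the lv entries shown in the log above.
_lvs_cmd_output = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677",
     "vg_name": "ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677"},
    {"lv_name": "osd-block-e91d200a-cf56-55df-b2f8-08f15361112f",
     "vg_name": "ceph-e91d200a-cf56-55df-b2f8-08f15361112f"},
]}]})

# Report JSON in the shape emitted by `pvs --reportformat json`,
# with the pv entries shown in the log above.
_pvs_cmd_output = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-e91d200a-cf56-55df-b2f8-08f15361112f"},
    {"pv_name": "/dev/sdc",
     "vg_name": "ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677"},
]}]})

def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge the lv/pv sections of both reports into one lvm_report dict."""
    lv = json.loads(lvs_json)["report"][0]["lv"]
    pv = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}

lvm_report = combine_reports(_lvs_cmd_output, _pvs_cmd_output)
print(json.dumps(lvm_report, indent=2))
```

The resulting dict has the same `lv`/`pv` layout as the "Print LVM report data" output above, which the subsequent "Fail if … LV defined in lvm_volumes is missing" checks iterate over.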
2026-01-13 00:42:49.601815 | orchestrator | 2026-01-13 00:42:49.601958 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-13 00:42:49.601987 | orchestrator | 2026-01-13 00:42:49.602000 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-13 00:42:49.602012 | orchestrator | Tuesday 13 January 2026 00:42:41 +0000 (0:00:00.272) 0:00:00.272 ******* 2026-01-13 00:42:49.602134 | orchestrator | ok: [testbed-manager] 2026-01-13 00:42:49.602162 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:42:49.602180 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:42:49.602197 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:42:49.602213 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:42:49.602231 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:42:49.602248 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:42:49.602265 | orchestrator | 2026-01-13 00:42:49.602303 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-13 00:42:49.602323 | orchestrator | Tuesday 13 January 2026 00:42:42 +0000 (0:00:01.049) 0:00:01.322 ******* 2026-01-13 00:42:49.602344 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:42:49.602366 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:42:49.602386 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:42:49.602404 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:42:49.602416 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:42:49.602428 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:49.602440 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:49.602452 | orchestrator | 2026-01-13 00:42:49.602464 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-13 00:42:49.602477 | orchestrator | 2026-01-13 00:42:49.602489 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-13 00:42:49.602501 | orchestrator | Tuesday 13 January 2026 00:42:43 +0000 (0:00:01.226) 0:00:02.548 ******* 2026-01-13 00:42:49.602513 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:42:49.602526 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:42:49.602537 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:42:49.602549 | orchestrator | ok: [testbed-manager] 2026-01-13 00:42:49.602561 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:42:49.602573 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:42:49.602585 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:42:49.602597 | orchestrator | 2026-01-13 00:42:49.602609 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-13 00:42:49.602621 | orchestrator | 2026-01-13 00:42:49.602633 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-13 00:42:49.602645 | orchestrator | Tuesday 13 January 2026 00:42:48 +0000 (0:00:04.955) 0:00:07.503 ******* 2026-01-13 00:42:49.602657 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:42:49.602669 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:42:49.602681 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:42:49.602777 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:42:49.602788 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:42:49.602798 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:42:49.602809 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:42:49.602819 | orchestrator | 2026-01-13 00:42:49.602830 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:42:49.602841 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:42:49.602853 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-13 00:42:49.602863 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:42:49.602874 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:42:49.602885 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:42:49.602899 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:42:49.602932 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:42:49.602950 | orchestrator | 2026-01-13 00:42:49.602969 | orchestrator | 2026-01-13 00:42:49.602989 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:42:49.603008 | orchestrator | Tuesday 13 January 2026 00:42:49 +0000 (0:00:00.482) 0:00:07.985 ******* 2026-01-13 00:42:49.603026 | orchestrator | =============================================================================== 2026-01-13 00:42:49.603043 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.96s 2026-01-13 00:42:49.603054 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s 2026-01-13 00:42:49.603065 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.05s 2026-01-13 00:42:49.603075 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2026-01-13 00:43:02.027737 | orchestrator | 2026-01-13 00:43:02 | INFO  | Task be4c30f5-fe4a-4450-a3bd-83c6374cf20b (frr) was prepared for execution. 2026-01-13 00:43:02.028052 | orchestrator | 2026-01-13 00:43:02 | INFO  | It takes a moment until task be4c30f5-fe4a-4450-a3bd-83c6374cf20b (frr) has been started and output is visible here. 
2026-01-13 00:43:30.356811 | orchestrator | 2026-01-13 00:43:30.356911 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-13 00:43:30.356919 | orchestrator | 2026-01-13 00:43:30.356924 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-13 00:43:30.356929 | orchestrator | Tuesday 13 January 2026 00:43:06 +0000 (0:00:00.265) 0:00:00.265 ******* 2026-01-13 00:43:30.356934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-13 00:43:30.356939 | orchestrator | 2026-01-13 00:43:30.356943 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-13 00:43:30.356947 | orchestrator | Tuesday 13 January 2026 00:43:07 +0000 (0:00:00.253) 0:00:00.519 ******* 2026-01-13 00:43:30.356951 | orchestrator | changed: [testbed-manager] 2026-01-13 00:43:30.356971 | orchestrator | 2026-01-13 00:43:30.356975 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-13 00:43:30.356991 | orchestrator | Tuesday 13 January 2026 00:43:08 +0000 (0:00:01.301) 0:00:01.821 ******* 2026-01-13 00:43:30.356995 | orchestrator | changed: [testbed-manager] 2026-01-13 00:43:30.356999 | orchestrator | 2026-01-13 00:43:30.357003 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-13 00:43:30.357007 | orchestrator | Tuesday 13 January 2026 00:43:18 +0000 (0:00:10.520) 0:00:12.341 ******* 2026-01-13 00:43:30.357011 | orchestrator | ok: [testbed-manager] 2026-01-13 00:43:30.357015 | orchestrator | 2026-01-13 00:43:30.357019 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-13 00:43:30.357023 | orchestrator | Tuesday 13 January 2026 00:43:20 +0000 (0:00:01.017) 0:00:13.359 ******* 2026-01-13 
00:43:30.357026 | orchestrator | changed: [testbed-manager] 2026-01-13 00:43:30.357030 | orchestrator | 2026-01-13 00:43:30.357034 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-13 00:43:30.357038 | orchestrator | Tuesday 13 January 2026 00:43:20 +0000 (0:00:00.973) 0:00:14.333 ******* 2026-01-13 00:43:30.357041 | orchestrator | ok: [testbed-manager] 2026-01-13 00:43:30.357045 | orchestrator | 2026-01-13 00:43:30.357049 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-13 00:43:30.357054 | orchestrator | Tuesday 13 January 2026 00:43:22 +0000 (0:00:01.200) 0:00:15.533 ******* 2026-01-13 00:43:30.357057 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:43:30.357061 | orchestrator | 2026-01-13 00:43:30.357065 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-13 00:43:30.357070 | orchestrator | Tuesday 13 January 2026 00:43:22 +0000 (0:00:00.163) 0:00:15.697 ******* 2026-01-13 00:43:30.357122 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:43:30.357130 | orchestrator | 2026-01-13 00:43:30.357137 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-13 00:43:30.357145 | orchestrator | Tuesday 13 January 2026 00:43:22 +0000 (0:00:00.163) 0:00:15.861 ******* 2026-01-13 00:43:30.357155 | orchestrator | changed: [testbed-manager] 2026-01-13 00:43:30.357162 | orchestrator | 2026-01-13 00:43:30.357169 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-13 00:43:30.357176 | orchestrator | Tuesday 13 January 2026 00:43:23 +0000 (0:00:00.982) 0:00:16.844 ******* 2026-01-13 00:43:30.357183 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-13 00:43:30.357189 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-13 00:43:30.357193 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-13 00:43:30.357198 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-13 00:43:30.357204 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-13 00:43:30.357211 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-13 00:43:30.357217 | orchestrator | 2026-01-13 00:43:30.357222 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-13 00:43:30.357228 | orchestrator | Tuesday 13 January 2026 00:43:26 +0000 (0:00:03.331) 0:00:20.176 ******* 2026-01-13 00:43:30.357234 | orchestrator | ok: [testbed-manager] 2026-01-13 00:43:30.357240 | orchestrator | 2026-01-13 00:43:30.357246 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-13 00:43:30.357252 | orchestrator | Tuesday 13 January 2026 00:43:28 +0000 (0:00:01.694) 0:00:21.870 ******* 2026-01-13 00:43:30.357258 | orchestrator | changed: [testbed-manager] 2026-01-13 00:43:30.357264 | orchestrator | 2026-01-13 00:43:30.357270 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:43:30.357278 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:43:30.357284 | orchestrator | 2026-01-13 00:43:30.357290 | orchestrator | 2026-01-13 00:43:30.357294 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:43:30.357298 | orchestrator | Tuesday 13 January 2026 00:43:30 +0000 (0:00:01.494) 0:00:23.365 ******* 2026-01-13 00:43:30.357302 | 
orchestrator | =============================================================================== 2026-01-13 00:43:30.357305 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.52s 2026-01-13 00:43:30.357309 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.33s 2026-01-13 00:43:30.357314 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.69s 2026-01-13 00:43:30.357318 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.49s 2026-01-13 00:43:30.357322 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.30s 2026-01-13 00:43:30.357344 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.20s 2026-01-13 00:43:30.357352 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.02s 2026-01-13 00:43:30.357358 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.98s 2026-01-13 00:43:30.357365 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.97s 2026-01-13 00:43:30.357372 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.25s 2026-01-13 00:43:30.357379 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-01-13 00:43:30.357386 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-01-13 00:43:30.742639 | orchestrator | 2026-01-13 00:43:30.746963 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Jan 13 00:43:30 UTC 2026 2026-01-13 00:43:30.747047 | orchestrator | 2026-01-13 00:43:32.818189 | orchestrator | 2026-01-13 00:43:32 | INFO  | Collection nutshell is prepared for execution 2026-01-13 00:43:32.818270 | orchestrator | 2026-01-13 00:43:32 | INFO  | A [0] - 
dotfiles 2026-01-13 00:43:42.909782 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [0] - homer 2026-01-13 00:43:42.909877 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [0] - netdata 2026-01-13 00:43:42.909887 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [0] - openstackclient 2026-01-13 00:43:42.910175 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [0] - phpmyadmin 2026-01-13 00:43:42.910240 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [0] - common 2026-01-13 00:43:42.915447 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [1] -- loadbalancer 2026-01-13 00:43:42.915779 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [2] --- opensearch 2026-01-13 00:43:42.915816 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [2] --- mariadb-ng 2026-01-13 00:43:42.915970 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [3] ---- horizon 2026-01-13 00:43:42.916327 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [3] ---- keystone 2026-01-13 00:43:42.916611 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [4] ----- neutron 2026-01-13 00:43:42.917101 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [5] ------ wait-for-nova 2026-01-13 00:43:42.917532 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [6] ------- octavia 2026-01-13 00:43:42.919225 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [4] ----- barbican 2026-01-13 00:43:42.919475 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [4] ----- designate 2026-01-13 00:43:42.919626 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [4] ----- ironic 2026-01-13 00:43:42.919931 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [4] ----- placement 2026-01-13 00:43:42.920142 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [4] ----- magnum 2026-01-13 00:43:42.921006 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [1] -- openvswitch 2026-01-13 00:43:42.921232 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [2] --- ovn 2026-01-13 00:43:42.922263 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [1] -- memcached 2026-01-13 
00:43:42.922341 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [1] -- redis 2026-01-13 00:43:42.922355 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [1] -- rabbitmq-ng 2026-01-13 00:43:42.922888 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [0] - kubernetes 2026-01-13 00:43:42.926088 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [1] -- kubeconfig 2026-01-13 00:43:42.926133 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [1] -- copy-kubeconfig 2026-01-13 00:43:42.926297 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [0] - ceph 2026-01-13 00:43:42.929327 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [1] -- ceph-pools 2026-01-13 00:43:42.929623 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [2] --- copy-ceph-keys 2026-01-13 00:43:42.929812 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [3] ---- cephclient 2026-01-13 00:43:42.930109 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-13 00:43:42.930302 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [4] ----- wait-for-keystone 2026-01-13 00:43:42.930741 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-13 00:43:42.931081 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [5] ------ glance 2026-01-13 00:43:42.931129 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [5] ------ cinder 2026-01-13 00:43:42.931141 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [5] ------ nova 2026-01-13 00:43:42.931516 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [4] ----- prometheus 2026-01-13 00:43:42.931613 | orchestrator | 2026-01-13 00:43:42 | INFO  | A [5] ------ grafana 2026-01-13 00:43:43.153995 | orchestrator | 2026-01-13 00:43:43 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-13 00:43:43.154114 | orchestrator | 2026-01-13 00:43:43 | INFO  | Tasks are running in the background 2026-01-13 00:43:46.040781 | orchestrator | 2026-01-13 00:43:46 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-13 00:43:48.160085 | orchestrator | 2026-01-13 00:43:48 | INFO  | Task f9d91f5a-e190-4cd8-a09b-07a90f2c0ae3 is in state STARTED 2026-01-13 00:43:48.160982 | orchestrator | 2026-01-13 00:43:48 | INFO  | Task de2d15df-73e5-4f2c-8364-14e2090e4924 is in state STARTED 2026-01-13 00:43:48.161535 | orchestrator | 2026-01-13 00:43:48 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:43:48.161947 | orchestrator | 2026-01-13 00:43:48 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED 2026-01-13 00:43:48.162376 | orchestrator | 2026-01-13 00:43:48 | INFO  | Task 18cb1beb-312f-4f92-8e89-583212497cc8 is in state STARTED 2026-01-13 00:43:48.162857 | orchestrator | 2026-01-13 00:43:48 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:43:48.163787 | orchestrator | 2026-01-13 00:43:48 | INFO  | Task 1049fb69-c628-41aa-be00-1b600139bf4b is in state STARTED 2026-01-13 00:43:48.163809 | orchestrator | 2026-01-13 00:43:48 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:43:51.202682 | orchestrator | 2026-01-13 00:43:51 | INFO  | Task f9d91f5a-e190-4cd8-a09b-07a90f2c0ae3 is in state STARTED 2026-01-13 00:43:51.203501 | orchestrator | 2026-01-13 00:43:51 | INFO  | Task de2d15df-73e5-4f2c-8364-14e2090e4924 is in state STARTED 2026-01-13 00:43:51.205525 | orchestrator | 2026-01-13 00:43:51 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:43:51.205728 | orchestrator | 2026-01-13 00:43:51 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED 2026-01-13 00:43:51.206751 | orchestrator | 2026-01-13 00:43:51 | INFO  | Task 18cb1beb-312f-4f92-8e89-583212497cc8 is in state STARTED 2026-01-13 00:43:51.212290 | orchestrator | 2026-01-13 00:43:51 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:43:51.212502 | orchestrator | 2026-01-13 00:43:51 | INFO  | Task 
1049fb69-c628-41aa-be00-1b600139bf4b is in state STARTED 2026-01-13 00:43:51.212515 | orchestrator | 2026-01-13 00:43:51 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:43:54.246146 | orchestrator | 2026-01-13 00:43:54 | INFO  | Task f9d91f5a-e190-4cd8-a09b-07a90f2c0ae3 is in state STARTED 2026-01-13 00:43:54.248853 | orchestrator | 2026-01-13 00:43:54 | INFO  | Task de2d15df-73e5-4f2c-8364-14e2090e4924 is in state STARTED 2026-01-13 00:43:54.249120 | orchestrator | 2026-01-13 00:43:54 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:43:54.249726 | orchestrator | 2026-01-13 00:43:54 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED 2026-01-13 00:43:54.250273 | orchestrator | 2026-01-13 00:43:54 | INFO  | Task 18cb1beb-312f-4f92-8e89-583212497cc8 is in state STARTED 2026-01-13 00:43:54.251153 | orchestrator | 2026-01-13 00:43:54 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:43:54.252968 | orchestrator | 2026-01-13 00:43:54 | INFO  | Task 1049fb69-c628-41aa-be00-1b600139bf4b is in state STARTED 2026-01-13 00:43:54.253030 | orchestrator | 2026-01-13 00:43:54 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:43:57.300233 | orchestrator | 2026-01-13 00:43:57 | INFO  | Task f9d91f5a-e190-4cd8-a09b-07a90f2c0ae3 is in state STARTED 2026-01-13 00:43:57.302122 | orchestrator | 2026-01-13 00:43:57 | INFO  | Task de2d15df-73e5-4f2c-8364-14e2090e4924 is in state STARTED 2026-01-13 00:43:57.302160 | orchestrator | 2026-01-13 00:43:57 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:43:57.302585 | orchestrator | 2026-01-13 00:43:57 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED 2026-01-13 00:43:57.304568 | orchestrator | 2026-01-13 00:43:57 | INFO  | Task 18cb1beb-312f-4f92-8e89-583212497cc8 is in state STARTED 2026-01-13 00:43:57.305110 | orchestrator | 2026-01-13 00:43:57 | INFO  | Task 
15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:43:57.308111 | orchestrator | 2026-01-13 00:43:57 | INFO  | Task 1049fb69-c628-41aa-be00-1b600139bf4b is in state STARTED 2026-01-13 00:43:57.308150 | orchestrator | 2026-01-13 00:43:57 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:44:00.481989 | orchestrator | 2026-01-13 00:44:00 | INFO  | Task f9d91f5a-e190-4cd8-a09b-07a90f2c0ae3 is in state STARTED 2026-01-13 00:44:00.482114 | orchestrator | 2026-01-13 00:44:00 | INFO  | Task de2d15df-73e5-4f2c-8364-14e2090e4924 is in state STARTED 2026-01-13 00:44:00.482121 | orchestrator | 2026-01-13 00:44:00 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:44:00.482125 | orchestrator | 2026-01-13 00:44:00 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED 2026-01-13 00:44:00.482129 | orchestrator | 2026-01-13 00:44:00 | INFO  | Task 18cb1beb-312f-4f92-8e89-583212497cc8 is in state STARTED 2026-01-13 00:44:00.482133 | orchestrator | 2026-01-13 00:44:00 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:44:00.482137 | orchestrator | 2026-01-13 00:44:00 | INFO  | Task 1049fb69-c628-41aa-be00-1b600139bf4b is in state STARTED 2026-01-13 00:44:00.482141 | orchestrator | 2026-01-13 00:44:00 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:44:03.463596 | orchestrator | 2026-01-13 00:44:03 | INFO  | Task f9d91f5a-e190-4cd8-a09b-07a90f2c0ae3 is in state STARTED 2026-01-13 00:44:03.464566 | orchestrator | 2026-01-13 00:44:03 | INFO  | Task de2d15df-73e5-4f2c-8364-14e2090e4924 is in state STARTED 2026-01-13 00:44:03.464622 | orchestrator | 2026-01-13 00:44:03 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:44:03.466541 | orchestrator | 2026-01-13 00:44:03 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED 2026-01-13 00:44:03.469250 | orchestrator | 2026-01-13 00:44:03 | INFO  | Task 
18cb1beb-312f-4f92-8e89-583212497cc8 is in state STARTED
2026-01-13 00:44:03 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:44:03 | INFO  | Task 1049fb69-c628-41aa-be00-1b600139bf4b is in state STARTED
2026-01-13 00:44:03 | INFO  | Wait 1 second(s) until the next check
[00:44:06: tasks f9d91f5a, de2d15df, ae35fb1d, 6ab78807, 18cb1beb, 15d62ab2 and 1049fb69 polled again, all still in state STARTED; wait repeated]

PLAY [Apply role geerlingguy.dotfiles] *****************************************

TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
Tuesday 13 January 2026 00:43:54 +0000 (0:00:00.446) 0:00:00.446 *******
changed: [testbed-node-0]
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
Tuesday 13 January 2026 00:43:58 +0000 (0:00:03.847) 0:00:04.293 *******
ok: [testbed-node-2] => (item=.tmux.conf)
ok: [testbed-node-0] => (item=.tmux.conf)
ok: [testbed-node-1] => (item=.tmux.conf)
ok: [testbed-node-3] => (item=.tmux.conf)
ok: [testbed-manager] => (item=.tmux.conf)
ok: [testbed-node-4] => (item=.tmux.conf)
ok: [testbed-node-5] => (item=.tmux.conf)

TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
Tuesday 13 January 2026 00:44:00 +0000 (0:00:02.188) 0:00:06.481 *******
ok: [testbed-node-0] => (item=.tmux.conf)
    (precheck "ls -F ~/.tmux.conf" returned rc=2: "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"; failed_when_result=False)
[identical result for testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-4, testbed-manager and testbed-node-5]

TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
Tuesday 13 January 2026 00:44:02 +0000 (0:00:01.866) 0:00:08.348 *******
ok: [testbed-node-0] => (item=.tmux.conf)
ok: [testbed-manager] => (item=.tmux.conf)
ok: [testbed-node-1] => (item=.tmux.conf)
ok: [testbed-node-2] => (item=.tmux.conf)
ok: [testbed-node-3] => (item=.tmux.conf)
ok: [testbed-node-4] => (item=.tmux.conf)
ok: [testbed-node-5] => (item=.tmux.conf)

TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
Tuesday 13 January 2026 00:44:04 +0000 (0:00:02.001) 0:00:10.349 *******
changed: [testbed-manager] => (item=.tmux.conf)
changed: [testbed-node-0] => (item=.tmux.conf)
changed: [testbed-node-1] => (item=.tmux.conf)
changed: [testbed-node-2] => (item=.tmux.conf)
changed: [testbed-node-3] => (item=.tmux.conf)
changed: [testbed-node-4] => (item=.tmux.conf)
changed: [testbed-node-5] => (item=.tmux.conf)

PLAY RECAP *********************************************************************
testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[identical recap for testbed-node-0 through testbed-node-5]

TASKS RECAP ********************************************************************
Tuesday 13 January 2026 00:44:07 +0000 (0:00:03.360) 0:00:13.710 *******
===============================================================================
geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.85s
geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.36s
geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.19s
geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.00s
geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.87s
2026-01-13 00:44:09 | INFO  | Task f9d91f5a-e190-4cd8-a09b-07a90f2c0ae3 is in state SUCCESS
2026-01-13 00:44:09 | INFO  | Tasks de2d15df, ae35fb1d, 77215c53, 6ab78807, 18cb1beb and 15d62ab2 are in state STARTED
2026-01-13 00:44:09 | INFO  | Task
1049fb69-c628-41aa-be00-1b600139bf4b is in state STARTED
2026-01-13 00:44:09 | INFO  | Wait 1 second(s) until the next check
[00:44:12 to 00:44:34: tasks de2d15df, ae35fb1d, 77215c53, 6ab78807, 18cb1beb, 15d62ab2 and 1049fb69 polled every ~3 s, all still in state STARTED]
2026-01-13 00:44:37 | INFO  | Task de2d15df-73e5-4f2c-8364-14e2090e4924 is in state SUCCESS
[00:44:40 to 00:44:43: the six remaining tasks polled, all still in state STARTED]
2026-01-13 00:44:46 | INFO  | Task 18cb1beb-312f-4f92-8e89-583212497cc8 is in state SUCCESS
[00:44:49 to 00:45:13: tasks ae35fb1d, 77215c53, 6ab78807, 15d62ab2 and 1049fb69 polled every ~3 s, all still in state STARTED]
2026-01-13 00:45:17 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:45:17 | INFO  | Task
77215c53-ff43-494e-9cab-86bfdd34cec3 is in state SUCCESS

PLAY [Apply role homer] ********************************************************

TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
Tuesday 13 January 2026 00:43:55 +0000 (0:00:01.241) 0:00:01.241 *******
ok: [testbed-manager] => {
    "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
}

TASK [osism.services.homer : Create traefik external network] ******************
Tuesday 13 January 2026 00:43:55 +0000 (0:00:00.241) 0:00:01.482 *******
ok: [testbed-manager]

TASK [osism.services.homer : Create required directories] **********************
Tuesday 13 January 2026 00:43:56 +0000 (0:00:01.068) 0:00:02.551 *******
changed: [testbed-manager] => (item=/opt/homer/configuration)
ok: [testbed-manager] => (item=/opt/homer)

TASK [osism.services.homer : Copy config.yml configuration file] ***************
Tuesday 13 January 2026 00:43:58 +0000 (0:00:01.720) 0:00:04.272 *******
changed: [testbed-manager]

TASK [osism.services.homer : Copy docker-compose.yml file] *********************
Tuesday 13 January 2026 00:44:01 +0000 (0:00:02.720) 0:00:06.992 *******
changed: [testbed-manager]

TASK [osism.services.homer : Manage homer service] *****************************
Tuesday 13 January 2026 00:44:03 +0000 (0:00:01.731) 0:00:08.723 *******
FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
ok: [testbed-manager]

RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
Tuesday 13 January 2026 00:44:31 +0000 (0:00:28.445) 0:00:37.169 *******
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Tuesday 13 January 2026 00:44:35 +0000 (0:00:04.266) 0:00:41.435 *******
===============================================================================
osism.services.homer : Manage homer service ---------------------------- 28.45s
osism.services.homer : Restart homer service ---------------------------- 4.27s
osism.services.homer : Copy config.yml configuration file --------------- 2.72s
osism.services.homer : Copy docker-compose.yml file --------------------- 1.73s
osism.services.homer : Create required directories ---------------------- 1.72s
osism.services.homer : Create traefik external network ------------------ 1.07s
osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.24s

PLAY [Apply role openstackclient] **********************************************

TASK [osism.services.openstackclient : Include tasks] **************************
Tuesday 13 January 2026 00:43:56 +0000 (0:00:00.581) 0:00:00.581 *******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager

TASK [osism.services.openstackclient : Create required directories] ************
Tuesday 13 January 2026 00:43:56 +0000 (0:00:00.359) 0:00:00.940 *******
changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-13 00:45:17.037712
| orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-01-13 00:45:17.037716 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-01-13 00:45:17.037719 | orchestrator | 2026-01-13 00:45:17.037723 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-01-13 00:45:17.037727 | orchestrator | Tuesday 13 January 2026 00:43:59 +0000 (0:00:02.825) 0:00:03.766 ******* 2026-01-13 00:45:17.037731 | orchestrator | changed: [testbed-manager] 2026-01-13 00:45:17.037747 | orchestrator | 2026-01-13 00:45:17.037751 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-01-13 00:45:17.037755 | orchestrator | Tuesday 13 January 2026 00:44:02 +0000 (0:00:03.162) 0:00:06.929 ******* 2026-01-13 00:45:17.037765 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-01-13 00:45:17.037769 | orchestrator | ok: [testbed-manager] 2026-01-13 00:45:17.037773 | orchestrator | 2026-01-13 00:45:17.037777 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-01-13 00:45:17.037781 | orchestrator | Tuesday 13 January 2026 00:44:36 +0000 (0:00:34.142) 0:00:41.071 ******* 2026-01-13 00:45:17.037784 | orchestrator | changed: [testbed-manager] 2026-01-13 00:45:17.037788 | orchestrator | 2026-01-13 00:45:17.037792 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-01-13 00:45:17.037796 | orchestrator | Tuesday 13 January 2026 00:44:38 +0000 (0:00:01.743) 0:00:42.815 ******* 2026-01-13 00:45:17.037799 | orchestrator | ok: [testbed-manager] 2026-01-13 00:45:17.037803 | orchestrator | 2026-01-13 00:45:17.037807 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-01-13 00:45:17.037811 | orchestrator | Tuesday 13 January 2026 00:44:39 +0000 (0:00:01.001) 
0:00:43.816 ******* 2026-01-13 00:45:17.037815 | orchestrator | changed: [testbed-manager] 2026-01-13 00:45:17.037818 | orchestrator | 2026-01-13 00:45:17.037822 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-01-13 00:45:17.037826 | orchestrator | Tuesday 13 January 2026 00:44:42 +0000 (0:00:02.612) 0:00:46.429 ******* 2026-01-13 00:45:17.037830 | orchestrator | changed: [testbed-manager] 2026-01-13 00:45:17.037833 | orchestrator | 2026-01-13 00:45:17.037837 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-01-13 00:45:17.037841 | orchestrator | Tuesday 13 January 2026 00:44:44 +0000 (0:00:02.059) 0:00:48.488 ******* 2026-01-13 00:45:17.037845 | orchestrator | changed: [testbed-manager] 2026-01-13 00:45:17.037848 | orchestrator | 2026-01-13 00:45:17.037852 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-01-13 00:45:17.037856 | orchestrator | Tuesday 13 January 2026 00:44:44 +0000 (0:00:00.648) 0:00:49.137 ******* 2026-01-13 00:45:17.037863 | orchestrator | ok: [testbed-manager] 2026-01-13 00:45:17.037867 | orchestrator | 2026-01-13 00:45:17.037871 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:45:17.037875 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:45:17.037879 | orchestrator | 2026-01-13 00:45:17.037882 | orchestrator | 2026-01-13 00:45:17.037886 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:45:17.037890 | orchestrator | Tuesday 13 January 2026 00:44:45 +0000 (0:00:00.413) 0:00:49.550 ******* 2026-01-13 00:45:17.037894 | orchestrator | =============================================================================== 2026-01-13 00:45:17.037897 | orchestrator | osism.services.openstackclient 
: Manage openstackclient service -------- 34.14s 2026-01-13 00:45:17.037901 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.16s 2026-01-13 00:45:17.037905 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.83s 2026-01-13 00:45:17.037909 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.61s 2026-01-13 00:45:17.037916 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.06s 2026-01-13 00:45:17.037920 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.74s 2026-01-13 00:45:17.037923 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.00s 2026-01-13 00:45:17.037927 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.65s 2026-01-13 00:45:17.037931 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.41s 2026-01-13 00:45:17.037935 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.36s 2026-01-13 00:45:17.037939 | orchestrator | 2026-01-13 00:45:17.037942 | orchestrator | 2026-01-13 00:45:17.037946 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-01-13 00:45:17.037950 | orchestrator | 2026-01-13 00:45:17.037954 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-01-13 00:45:17.037957 | orchestrator | Tuesday 13 January 2026 00:44:12 +0000 (0:00:00.354) 0:00:00.354 ******* 2026-01-13 00:45:17.037961 | orchestrator | ok: [testbed-manager] 2026-01-13 00:45:17.037965 | orchestrator | 2026-01-13 00:45:17.037969 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-01-13 00:45:17.037972 | orchestrator | Tuesday 13 January 2026 00:44:14 +0000 (0:00:01.132) 
0:00:01.487 ******* 2026-01-13 00:45:17.037976 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-01-13 00:45:17.037980 | orchestrator | 2026-01-13 00:45:17.037984 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-01-13 00:45:17.037988 | orchestrator | Tuesday 13 January 2026 00:44:14 +0000 (0:00:00.602) 0:00:02.089 ******* 2026-01-13 00:45:17.037992 | orchestrator | changed: [testbed-manager] 2026-01-13 00:45:17.037996 | orchestrator | 2026-01-13 00:45:17.038001 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-01-13 00:45:17.038005 | orchestrator | Tuesday 13 January 2026 00:44:16 +0000 (0:00:01.619) 0:00:03.708 ******* 2026-01-13 00:45:17.038009 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-01-13 00:45:17.038047 | orchestrator | ok: [testbed-manager] 2026-01-13 00:45:17.038054 | orchestrator | 2026-01-13 00:45:17.038061 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-01-13 00:45:17.038068 | orchestrator | Tuesday 13 January 2026 00:45:09 +0000 (0:00:53.611) 0:00:57.320 ******* 2026-01-13 00:45:17.038074 | orchestrator | changed: [testbed-manager] 2026-01-13 00:45:17.038080 | orchestrator | 2026-01-13 00:45:17.038086 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:45:17.038092 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:45:17.038098 | orchestrator | 2026-01-13 00:45:17.038102 | orchestrator | 2026-01-13 00:45:17.038107 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:45:17.038117 | orchestrator | Tuesday 13 January 2026 00:45:15 +0000 (0:00:05.575) 0:01:02.895 ******* 2026-01-13 00:45:17.038128 | orchestrator | 
=============================================================================== 2026-01-13 00:45:17.038196 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 53.61s 2026-01-13 00:45:17.038202 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 5.58s 2026-01-13 00:45:17.038208 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.62s 2026-01-13 00:45:17.038214 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.13s 2026-01-13 00:45:17.038220 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.60s 2026-01-13 00:45:17.038229 | orchestrator | 2026-01-13 00:45:17 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED 2026-01-13 00:45:17.039595 | orchestrator | 2026-01-13 00:45:17 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:45:17.043656 | orchestrator | 2026-01-13 00:45:17 | INFO  | Task 1049fb69-c628-41aa-be00-1b600139bf4b is in state STARTED 2026-01-13 00:45:17.043687 | orchestrator | 2026-01-13 00:45:17 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:45:20.099388 | orchestrator | 2026-01-13 00:45:20 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:45:20.099470 | orchestrator | 2026-01-13 00:45:20 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED 2026-01-13 00:45:20.099477 | orchestrator | 2026-01-13 00:45:20 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:45:20.099482 | orchestrator | 2026-01-13 00:45:20 | INFO  | Task 1049fb69-c628-41aa-be00-1b600139bf4b is in state STARTED 2026-01-13 00:45:20.099486 | orchestrator | 2026-01-13 00:45:20 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:45:23.145750 | orchestrator | 2026-01-13 00:45:23 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in 
state STARTED 2026-01-13 00:45:23.146913 | orchestrator | 2026-01-13 00:45:23 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED 2026-01-13 00:45:23.148807 | orchestrator | 2026-01-13 00:45:23 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:45:23.149288 | orchestrator | 2026-01-13 00:45:23 | INFO  | Task 1049fb69-c628-41aa-be00-1b600139bf4b is in state STARTED 2026-01-13 00:45:23.149325 | orchestrator | 2026-01-13 00:45:23 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:45:26.183960 | orchestrator | 2026-01-13 00:45:26 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:45:26.185416 | orchestrator | 2026-01-13 00:45:26 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED 2026-01-13 00:45:26.185462 | orchestrator | 2026-01-13 00:45:26 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:45:26.185780 | orchestrator | 2026-01-13 00:45:26 | INFO  | Task 1049fb69-c628-41aa-be00-1b600139bf4b is in state SUCCESS 2026-01-13 00:45:26.185802 | orchestrator | 2026-01-13 00:45:26 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:45:26.187298 | orchestrator | 2026-01-13 00:45:26.187326 | orchestrator | 2026-01-13 00:45:26.187335 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 00:45:26.187342 | orchestrator | 2026-01-13 00:45:26.187349 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 00:45:26.187356 | orchestrator | Tuesday 13 January 2026 00:43:55 +0000 (0:00:00.305) 0:00:00.305 ******* 2026-01-13 00:45:26.187363 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-01-13 00:45:26.187369 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-01-13 00:45:26.187376 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 
2026-01-13 00:45:26.187383 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-01-13 00:45:26.187389 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-01-13 00:45:26.187396 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-01-13 00:45:26.187403 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-01-13 00:45:26.187410 | orchestrator | 2026-01-13 00:45:26.187416 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-01-13 00:45:26.187423 | orchestrator | 2026-01-13 00:45:26.187430 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-01-13 00:45:26.187436 | orchestrator | Tuesday 13 January 2026 00:43:56 +0000 (0:00:01.322) 0:00:01.628 ******* 2026-01-13 00:45:26.187479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:45:26.187504 | orchestrator | 2026-01-13 00:45:26.187510 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-01-13 00:45:26.187516 | orchestrator | Tuesday 13 January 2026 00:43:59 +0000 (0:00:03.084) 0:00:04.713 ******* 2026-01-13 00:45:26.187522 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:45:26.187529 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:45:26.187535 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:45:26.187542 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:45:26.187548 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:45:26.187555 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:45:26.187561 | orchestrator | ok: [testbed-manager] 2026-01-13 00:45:26.187567 | orchestrator | 2026-01-13 00:45:26.187574 | orchestrator | TASK [osism.services.netdata : 
Install apt-transport-https package] ************ 2026-01-13 00:45:26.187580 | orchestrator | Tuesday 13 January 2026 00:44:02 +0000 (0:00:02.505) 0:00:07.218 ******* 2026-01-13 00:45:26.187586 | orchestrator | ok: [testbed-manager] 2026-01-13 00:45:26.187592 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:45:26.187599 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:45:26.187605 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:45:26.187611 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:45:26.187642 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:45:26.187648 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:45:26.187654 | orchestrator | 2026-01-13 00:45:26.187661 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-01-13 00:45:26.187667 | orchestrator | Tuesday 13 January 2026 00:44:05 +0000 (0:00:03.685) 0:00:10.904 ******* 2026-01-13 00:45:26.187674 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:45:26.187680 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:45:26.187686 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:45:26.187693 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:45:26.187699 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:45:26.187705 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:45:26.187712 | orchestrator | changed: [testbed-manager] 2026-01-13 00:45:26.187718 | orchestrator | 2026-01-13 00:45:26.187725 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-01-13 00:45:26.187731 | orchestrator | Tuesday 13 January 2026 00:44:08 +0000 (0:00:02.058) 0:00:12.963 ******* 2026-01-13 00:45:26.187737 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:45:26.187747 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:45:26.187751 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:45:26.187754 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:45:26.187758 | 
orchestrator | changed: [testbed-node-4] 2026-01-13 00:45:26.187762 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:45:26.187765 | orchestrator | changed: [testbed-manager] 2026-01-13 00:45:26.187769 | orchestrator | 2026-01-13 00:45:26.187773 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-01-13 00:45:26.187777 | orchestrator | Tuesday 13 January 2026 00:44:20 +0000 (0:00:12.305) 0:00:25.269 ******* 2026-01-13 00:45:26.187780 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:45:26.187784 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:45:26.187788 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:45:26.187791 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:45:26.187795 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:45:26.187798 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:45:26.187802 | orchestrator | changed: [testbed-manager] 2026-01-13 00:45:26.187806 | orchestrator | 2026-01-13 00:45:26.187809 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-01-13 00:45:26.187813 | orchestrator | Tuesday 13 January 2026 00:45:04 +0000 (0:00:43.787) 0:01:09.057 ******* 2026-01-13 00:45:26.187817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:45:26.187836 | orchestrator | 2026-01-13 00:45:26.187840 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-01-13 00:45:26.187844 | orchestrator | Tuesday 13 January 2026 00:45:05 +0000 (0:00:01.388) 0:01:10.445 ******* 2026-01-13 00:45:26.187848 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-01-13 00:45:26.187852 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-01-13 
00:45:26.187856 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-01-13 00:45:26.187859 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-01-13 00:45:26.187871 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-01-13 00:45:26.187875 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-01-13 00:45:26.187879 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-01-13 00:45:26.187883 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-01-13 00:45:26.187886 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-01-13 00:45:26.187890 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-01-13 00:45:26.187894 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-01-13 00:45:26.187898 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-01-13 00:45:26.187902 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-01-13 00:45:26.187906 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-01-13 00:45:26.187911 | orchestrator | 2026-01-13 00:45:26.187915 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-01-13 00:45:26.187920 | orchestrator | Tuesday 13 January 2026 00:45:10 +0000 (0:00:04.818) 0:01:15.263 ******* 2026-01-13 00:45:26.187924 | orchestrator | ok: [testbed-manager] 2026-01-13 00:45:26.187929 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:45:26.187933 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:45:26.187937 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:45:26.187943 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:45:26.187950 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:45:26.187958 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:45:26.187965 | orchestrator | 2026-01-13 00:45:26.187972 | orchestrator | TASK [osism.services.netdata : Opt out from 
anonymous statistics] ************** 2026-01-13 00:45:26.187979 | orchestrator | Tuesday 13 January 2026 00:45:11 +0000 (0:00:00.968) 0:01:16.232 ******* 2026-01-13 00:45:26.187986 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:45:26.187990 | orchestrator | changed: [testbed-manager] 2026-01-13 00:45:26.187995 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:45:26.187999 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:45:26.188003 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:45:26.188007 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:45:26.188011 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:45:26.188015 | orchestrator | 2026-01-13 00:45:26.188020 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-01-13 00:45:26.188024 | orchestrator | Tuesday 13 January 2026 00:45:12 +0000 (0:00:01.259) 0:01:17.491 ******* 2026-01-13 00:45:26.188028 | orchestrator | ok: [testbed-manager] 2026-01-13 00:45:26.188033 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:45:26.188039 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:45:26.188045 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:45:26.188053 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:45:26.188058 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:45:26.188062 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:45:26.188066 | orchestrator | 2026-01-13 00:45:26.188071 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-01-13 00:45:26.188075 | orchestrator | Tuesday 13 January 2026 00:45:13 +0000 (0:00:01.343) 0:01:18.835 ******* 2026-01-13 00:45:26.188079 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:45:26.188083 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:45:26.188087 | orchestrator | ok: [testbed-manager] 2026-01-13 00:45:26.188102 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:45:26.188106 | orchestrator | ok: [testbed-node-3] 
2026-01-13 00:45:26.188111 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:45:26.188115 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:45:26.188119 | orchestrator | 2026-01-13 00:45:26.188123 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-01-13 00:45:26.188127 | orchestrator | Tuesday 13 January 2026 00:45:16 +0000 (0:00:02.361) 0:01:21.197 ******* 2026-01-13 00:45:26.188132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-01-13 00:45:26.188139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:45:26.188144 | orchestrator | 2026-01-13 00:45:26.188153 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-01-13 00:45:26.188157 | orchestrator | Tuesday 13 January 2026 00:45:18 +0000 (0:00:01.823) 0:01:23.020 ******* 2026-01-13 00:45:26.188162 | orchestrator | changed: [testbed-manager] 2026-01-13 00:45:26.188166 | orchestrator | 2026-01-13 00:45:26.188170 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-01-13 00:45:26.188174 | orchestrator | Tuesday 13 January 2026 00:45:19 +0000 (0:00:01.897) 0:01:24.918 ******* 2026-01-13 00:45:26.188178 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:45:26.188183 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:45:26.188187 | orchestrator | changed: [testbed-manager] 2026-01-13 00:45:26.188191 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:45:26.188195 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:45:26.188199 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:45:26.188204 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:45:26.188208 | 
orchestrator | 2026-01-13 00:45:26.188212 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:45:26.188216 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:45:26.188221 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:45:26.188225 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:45:26.188230 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:45:26.188237 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:45:26.188242 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:45:26.188246 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:45:26.188250 | orchestrator | 2026-01-13 00:45:26.188255 | orchestrator | 2026-01-13 00:45:26.188259 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:45:26.188263 | orchestrator | Tuesday 13 January 2026 00:45:23 +0000 (0:00:03.431) 0:01:28.349 ******* 2026-01-13 00:45:26.188269 | orchestrator | =============================================================================== 2026-01-13 00:45:26.188276 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 43.79s 2026-01-13 00:45:26.188283 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.31s 2026-01-13 00:45:26.188290 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.82s 2026-01-13 00:45:26.188296 | orchestrator | osism.services.netdata : Install apt-transport-https package 
------------ 3.69s 2026-01-13 00:45:26.188308 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.43s 2026-01-13 00:45:26.188314 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.08s 2026-01-13 00:45:26.188321 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.51s 2026-01-13 00:45:26.188327 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.36s 2026-01-13 00:45:26.188334 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.06s 2026-01-13 00:45:26.188341 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.90s 2026-01-13 00:45:26.188347 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.82s 2026-01-13 00:45:26.188354 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.39s 2026-01-13 00:45:26.188360 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.34s 2026-01-13 00:45:26.188366 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.32s 2026-01-13 00:45:26.188373 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.26s 2026-01-13 00:45:26.188380 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 0.97s 2026-01-13 00:45:29.248484 | orchestrator | 2026-01-13 00:45:29 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:45:29.253509 | orchestrator | 2026-01-13 00:45:29 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED 2026-01-13 00:45:29.254649 | orchestrator | 2026-01-13 00:45:29 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:45:29.254683 | orchestrator | 2026-01-13 00:45:29 | 
INFO  | Wait 1 second(s) until the next check
2026-01-13 00:45:32.304807 | orchestrator | 2026-01-13 00:45:32 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:45:32.306087 | orchestrator | 2026-01-13 00:45:32 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED
2026-01-13 00:45:32.308748 | orchestrator | 2026-01-13 00:45:32 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:45:32.308785 | orchestrator | 2026-01-13 00:45:32 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:45:35.360401 | orchestrator | 2026-01-13 00:45:35 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:45:35.364258 | orchestrator | 2026-01-13 00:45:35 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED
2026-01-13 00:45:35.366311 | orchestrator | 2026-01-13 00:45:35 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:45:35.366358 | orchestrator | 2026-01-13 00:45:35 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:45:38.423505 | orchestrator | 2026-01-13 00:45:38 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:45:38.425296 | orchestrator | 2026-01-13 00:45:38 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED
2026-01-13 00:45:38.428383 | orchestrator | 2026-01-13 00:45:38 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:45:38.428965 | orchestrator | 2026-01-13 00:45:38 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:45:41.481661 | orchestrator | 2026-01-13 00:45:41 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:45:41.486558 | orchestrator | 2026-01-13 00:45:41 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED
2026-01-13 00:45:41.490054 | orchestrator | 2026-01-13 00:45:41 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:45:41.490112 | orchestrator | 2026-01-13 00:45:41 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:45:44.595355 | orchestrator | 2026-01-13 00:45:44 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:45:44.600353 | orchestrator | 2026-01-13 00:45:44 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED
2026-01-13 00:45:44.602701 | orchestrator | 2026-01-13 00:45:44 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:45:44.602789 | orchestrator | 2026-01-13 00:45:44 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:45:47.663476 | orchestrator | 2026-01-13 00:45:47 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:45:47.664952 | orchestrator | 2026-01-13 00:45:47 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED
2026-01-13 00:45:47.666253 | orchestrator | 2026-01-13 00:45:47 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:45:47.666280 | orchestrator | 2026-01-13 00:45:47 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:45:50.716381 | orchestrator | 2026-01-13 00:45:50 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:45:50.716482 | orchestrator | 2026-01-13 00:45:50 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED
2026-01-13 00:45:50.716506 | orchestrator | 2026-01-13 00:45:50 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:45:50.716514 | orchestrator | 2026-01-13 00:45:50 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:45:53.772253 | orchestrator | 2026-01-13 00:45:53 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:45:53.773021 | orchestrator | 2026-01-13 00:45:53 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED
2026-01-13 00:45:53.775418 | orchestrator | 2026-01-13 00:45:53 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:45:53.775460 | orchestrator | 2026-01-13 00:45:53 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:45:56.823104 | orchestrator | 2026-01-13 00:45:56 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:45:56.823705 | orchestrator | 2026-01-13 00:45:56 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED
2026-01-13 00:45:56.824854 | orchestrator | 2026-01-13 00:45:56 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:45:56.824895 | orchestrator | 2026-01-13 00:45:56 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:45:59.864352 | orchestrator | 2026-01-13 00:45:59 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:45:59.866821 | orchestrator | 2026-01-13 00:45:59 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED
2026-01-13 00:45:59.868508 | orchestrator | 2026-01-13 00:45:59 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:45:59.868686 | orchestrator | 2026-01-13 00:45:59 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:46:02.922201 | orchestrator | 2026-01-13 00:46:02 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:46:02.922648 | orchestrator | 2026-01-13 00:46:02 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state STARTED
2026-01-13 00:46:02.924161 | orchestrator | 2026-01-13 00:46:02 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:46:02.924205 | orchestrator | 2026-01-13 00:46:02 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:46:05.962684 | orchestrator | 2026-01-13 00:46:05 | INFO  | Task b9424a02-7da9-4373-8629-d432bfa169ab is in state STARTED
2026-01-13 00:46:05.964142 | orchestrator | 2026-01-13 00:46:05 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:46:05.965531 | orchestrator | 2026-01-13 00:46:05 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:46:05.969069 | orchestrator | 2026-01-13 00:46:05 | INFO  | Task 6ab78807-a77c-43f5-9000-4296dc591a4d is in state SUCCESS
2026-01-13 00:46:05.971824 | orchestrator |
2026-01-13 00:46:05.971863 | orchestrator |
2026-01-13 00:46:05.971872 | orchestrator | PLAY [Apply role common] *******************************************************
2026-01-13 00:46:05.971881 | orchestrator |
2026-01-13 00:46:05.971889 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-13 00:46:05.971898 | orchestrator | Tuesday 13 January 2026 00:43:47 +0000 (0:00:00.249) 0:00:00.249 *******
2026-01-13 00:46:05.971907 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 00:46:05.971916 | orchestrator |
2026-01-13 00:46:05.971924 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-01-13 00:46:05.971932 | orchestrator | Tuesday 13 January 2026 00:43:49 +0000 (0:00:01.242) 0:00:01.491 *******
2026-01-13 00:46:05.971940 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-13 00:46:05.971948 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-13 00:46:05.971956 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-13 00:46:05.971963 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-13 00:46:05.971971 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-13 00:46:05.971979 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-13 00:46:05.971987 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-13 00:46:05.971995 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-13 00:46:05.972003 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-13 00:46:05.972010 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-13 00:46:05.972018 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-13 00:46:05.972026 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-13 00:46:05.972034 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-13 00:46:05.972042 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-13 00:46:05.972050 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-13 00:46:05.972058 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-13 00:46:05.972066 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-13 00:46:05.972074 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-13 00:46:05.972082 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-13 00:46:05.972089 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-13 00:46:05.972097 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-13 00:46:05.972105 | orchestrator |
2026-01-13 00:46:05.972113 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-13 00:46:05.972137 | orchestrator | Tuesday 13 January 2026 00:43:53 +0000 (0:00:03.835) 0:00:05.326 *******
2026-01-13 00:46:05.972146 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 00:46:05.972155 | orchestrator |
2026-01-13 00:46:05.972163 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-01-13 00:46:05.972175 | orchestrator | Tuesday 13 January 2026 00:43:54 +0000 (0:00:01.122) 0:00:06.449 *******
2026-01-13 00:46:05.972187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972233 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972242 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972316 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972325 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972333 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972420 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972439 | orchestrator |
2026-01-13 00:46:05.972448 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-01-13 00:46:05.972457 | orchestrator | Tuesday 13 January 2026 00:43:58 +0000 (0:00:04.707) 0:00:11.157 *******
2026-01-13 00:46:05.972471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972505 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:46:05.972516 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972532 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972543 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972552 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:46:05.972561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972656 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:46:05.972667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972778 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:46:05.972786 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:46:05.972794 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:46:05.972802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972832 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:46:05.972840 | orchestrator |
2026-01-13 00:46:05.972848 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-01-13 00:46:05.972856 | orchestrator | Tuesday 13 January 2026 00:44:00 +0000 (0:00:01.538) 0:00:12.696 *******
2026-01-13 00:46:05.972869 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972878 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972886 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972895 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:46:05.972903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-13 00:46:05.972948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:46:05.972961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes':
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.972969 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:46:05.972977 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:46:05.972985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-13 00:46:05.972993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.973005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.973013 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-13 00:46:05.973026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.973034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.973047 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:46:05.973055 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:46:05.973063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-13 00:46:05.973071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.973079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.973087 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:46:05.973095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-13 00:46:05.973109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.973117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.973125 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:46:05.973133 | orchestrator | 2026-01-13 00:46:05.973141 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-13 00:46:05.973149 | orchestrator | Tuesday 13 January 2026 00:44:03 +0000 (0:00:03.058) 0:00:15.754 ******* 2026-01-13 00:46:05.973157 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:46:05.973169 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:46:05.973177 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:46:05.973185 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:46:05.973192 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:46:05.973204 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:46:05.973212 | orchestrator | 
skipping: [testbed-node-5] 2026-01-13 00:46:05.973220 | orchestrator | 2026-01-13 00:46:05.973233 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-13 00:46:05.973245 | orchestrator | Tuesday 13 January 2026 00:44:04 +0000 (0:00:00.896) 0:00:16.650 ******* 2026-01-13 00:46:05.973258 | orchestrator | skipping: [testbed-manager] 2026-01-13 00:46:05.973271 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:46:05.973283 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:46:05.973295 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:46:05.973309 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:46:05.973323 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:46:05.973336 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:46:05.973350 | orchestrator | 2026-01-13 00:46:05.973363 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-13 00:46:05.973377 | orchestrator | Tuesday 13 January 2026 00:44:05 +0000 (0:00:01.115) 0:00:17.765 ******* 2026-01-13 00:46:05.973390 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.973403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.973417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.973432 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.973475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.973492 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.973506 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.973515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973537 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973546 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973570 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973583 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973649 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973657 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973665 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973699 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.973712 | orchestrator | 2026-01-13 00:46:05.973727 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-13 00:46:05.973740 | orchestrator | Tuesday 13 January 2026 00:44:11 +0000 (0:00:06.384) 0:00:24.149 ******* 2026-01-13 00:46:05.973755 | orchestrator | [WARNING]: Skipped 2026-01-13 00:46:05.973770 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-13 00:46:05.973785 | orchestrator | to this access issue: 2026-01-13 00:46:05.973793 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-13 00:46:05.973801 | orchestrator | directory 2026-01-13 00:46:05.973809 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 00:46:05.973817 | orchestrator | 2026-01-13 00:46:05.973825 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-13 00:46:05.973833 | orchestrator | Tuesday 13 January 2026 00:44:14 +0000 (0:00:02.343) 0:00:26.493 ******* 2026-01-13 00:46:05.973840 | orchestrator | [WARNING]: Skipped 2026-01-13 00:46:05.973848 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 
2026-01-13 00:46:05.973861 | orchestrator | to this access issue: 2026-01-13 00:46:05.973870 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-13 00:46:05.973877 | orchestrator | directory 2026-01-13 00:46:05.973885 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 00:46:05.973893 | orchestrator | 2026-01-13 00:46:05.973901 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-13 00:46:05.973908 | orchestrator | Tuesday 13 January 2026 00:44:15 +0000 (0:00:01.110) 0:00:27.603 ******* 2026-01-13 00:46:05.973916 | orchestrator | [WARNING]: Skipped 2026-01-13 00:46:05.973924 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-13 00:46:05.973932 | orchestrator | to this access issue: 2026-01-13 00:46:05.973939 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-13 00:46:05.973947 | orchestrator | directory 2026-01-13 00:46:05.973955 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 00:46:05.973963 | orchestrator | 2026-01-13 00:46:05.973970 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-13 00:46:05.973978 | orchestrator | Tuesday 13 January 2026 00:44:16 +0000 (0:00:00.987) 0:00:28.591 ******* 2026-01-13 00:46:05.973986 | orchestrator | [WARNING]: Skipped 2026-01-13 00:46:05.973994 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-13 00:46:05.974001 | orchestrator | to this access issue: 2026-01-13 00:46:05.974009 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-13 00:46:05.974155 | orchestrator | directory 2026-01-13 00:46:05.974165 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 00:46:05.974173 | orchestrator | 2026-01-13 00:46:05.974181 | orchestrator | 
TASK [common : Copying over fluentd.conf] ************************************** 2026-01-13 00:46:05.974189 | orchestrator | Tuesday 13 January 2026 00:44:17 +0000 (0:00:00.872) 0:00:29.464 ******* 2026-01-13 00:46:05.974196 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:46:05.974204 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:46:05.974212 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:46:05.974292 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:46:05.974302 | orchestrator | changed: [testbed-manager] 2026-01-13 00:46:05.974310 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:46:05.974325 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:46:05.974333 | orchestrator | 2026-01-13 00:46:05.974341 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-13 00:46:05.974349 | orchestrator | Tuesday 13 January 2026 00:44:20 +0000 (0:00:03.315) 0:00:32.779 ******* 2026-01-13 00:46:05.974357 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-13 00:46:05.974365 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-13 00:46:05.974373 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-13 00:46:05.974381 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-13 00:46:05.974389 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-13 00:46:05.974397 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-13 00:46:05.974404 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-13 00:46:05.974412 | 
orchestrator | 2026-01-13 00:46:05.974420 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-13 00:46:05.974428 | orchestrator | Tuesday 13 January 2026 00:44:23 +0000 (0:00:03.119) 0:00:35.898 ******* 2026-01-13 00:46:05.974435 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:46:05.974443 | orchestrator | changed: [testbed-manager] 2026-01-13 00:46:05.974451 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:46:05.974458 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:46:05.974466 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:46:05.974474 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:46:05.974481 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:46:05.974489 | orchestrator | 2026-01-13 00:46:05.974497 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-13 00:46:05.974510 | orchestrator | Tuesday 13 January 2026 00:44:27 +0000 (0:00:03.634) 0:00:39.533 ******* 2026-01-13 00:46:05.974518 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.974533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.974542 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.974551 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.974565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.974573 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.974581 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.974623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.974634 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.974649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.974657 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.974671 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.974679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.974687 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.974695 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.974707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-01-13 00:46:05.974715 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.974729 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.974737 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.974750 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:46:05.974758 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.974766 | orchestrator | 2026-01-13 00:46:05.974775 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-13 00:46:05.974783 | orchestrator | Tuesday 13 January 2026 00:44:31 +0000 (0:00:04.745) 0:00:44.278 ******* 2026-01-13 00:46:05.974790 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-13 00:46:05.974798 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-13 00:46:05.974806 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-13 00:46:05.974814 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-13 00:46:05.974822 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-13 00:46:05.974829 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-13 00:46:05.974837 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-13 00:46:05.974845 | orchestrator | 2026-01-13 00:46:05.974853 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-13 00:46:05.974861 | 
orchestrator | Tuesday 13 January 2026 00:44:34 +0000 (0:00:02.433) 0:00:46.711 ******* 2026-01-13 00:46:05.974868 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-13 00:46:05.974876 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-13 00:46:05.974884 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-13 00:46:05.974892 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-13 00:46:05.974906 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-13 00:46:05.974914 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-13 00:46:05.974922 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-13 00:46:05.974930 | orchestrator | 2026-01-13 00:46:05.974940 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-13 00:46:05.975004 | orchestrator | Tuesday 13 January 2026 00:44:37 +0000 (0:00:02.608) 0:00:49.320 ******* 2026-01-13 00:46:05.975014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.975111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.975124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.975132 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.975140 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.975149 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.975157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-13 00:46:05.975170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.975189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.975198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.975207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.975215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.975223 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.975231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.975243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.975256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.975269 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.975278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.975286 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 
00:46:05.975295 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.975303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:46:05.975311 | orchestrator | 2026-01-13 00:46:05.975319 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-13 00:46:05.975327 | orchestrator | Tuesday 13 January 2026 00:44:40 +0000 (0:00:03.331) 0:00:52.651 ******* 2026-01-13 00:46:05.975335 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:46:05.975342 | orchestrator | changed: [testbed-manager] 2026-01-13 00:46:05.975350 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:46:05.975358 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:46:05.975366 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:46:05.975374 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:46:05.975381 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:46:05.975389 | orchestrator | 2026-01-13 00:46:05.975397 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-01-13 00:46:05.975405 | orchestrator | Tuesday 13 January 2026 00:44:42 +0000 (0:00:02.177) 0:00:54.829 ******* 2026-01-13 00:46:05.975412 | 
orchestrator | changed: [testbed-node-0] 2026-01-13 00:46:05.975424 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:46:05.975432 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:46:05.975439 | orchestrator | changed: [testbed-manager] 2026-01-13 00:46:05.975447 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:46:05.975455 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:46:05.975463 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:46:05.975470 | orchestrator | 2026-01-13 00:46:05.975478 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-13 00:46:05.975486 | orchestrator | Tuesday 13 January 2026 00:44:44 +0000 (0:00:01.932) 0:00:56.761 ******* 2026-01-13 00:46:05.975494 | orchestrator | 2026-01-13 00:46:05.975502 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-13 00:46:05.975510 | orchestrator | Tuesday 13 January 2026 00:44:44 +0000 (0:00:00.077) 0:00:56.838 ******* 2026-01-13 00:46:05.975517 | orchestrator | 2026-01-13 00:46:05.975525 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-13 00:46:05.975533 | orchestrator | Tuesday 13 January 2026 00:44:44 +0000 (0:00:00.080) 0:00:56.919 ******* 2026-01-13 00:46:05.975541 | orchestrator | 2026-01-13 00:46:05.975548 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-13 00:46:05.975560 | orchestrator | Tuesday 13 January 2026 00:44:45 +0000 (0:00:00.413) 0:00:57.332 ******* 2026-01-13 00:46:05.975568 | orchestrator | 2026-01-13 00:46:05.975576 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-13 00:46:05.975584 | orchestrator | Tuesday 13 January 2026 00:44:45 +0000 (0:00:00.128) 0:00:57.461 ******* 2026-01-13 00:46:05.975646 | orchestrator | 2026-01-13 00:46:05.975656 | orchestrator | TASK [common : 
Flush handlers] ************************************************* 2026-01-13 00:46:05.975664 | orchestrator | Tuesday 13 January 2026 00:44:45 +0000 (0:00:00.121) 0:00:57.583 ******* 2026-01-13 00:46:05.975671 | orchestrator | 2026-01-13 00:46:05.975679 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-13 00:46:05.975687 | orchestrator | Tuesday 13 January 2026 00:44:45 +0000 (0:00:00.078) 0:00:57.661 ******* 2026-01-13 00:46:05.975695 | orchestrator | 2026-01-13 00:46:05.975702 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-01-13 00:46:05.975712 | orchestrator | Tuesday 13 January 2026 00:44:45 +0000 (0:00:00.097) 0:00:57.759 ******* 2026-01-13 00:46:05.975719 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:46:05.975726 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:46:05.975732 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:46:05.975739 | orchestrator | changed: [testbed-manager] 2026-01-13 00:46:05.975746 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:46:05.975754 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:46:05.975761 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:46:05.975768 | orchestrator | 2026-01-13 00:46:05.975776 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-01-13 00:46:05.975783 | orchestrator | Tuesday 13 January 2026 00:45:16 +0000 (0:00:30.903) 0:01:28.662 ******* 2026-01-13 00:46:05.975791 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:46:05.975799 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:46:05.975807 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:46:05.975814 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:46:05.975821 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:46:05.975828 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:46:05.975836 | orchestrator | 
changed: [testbed-manager] 2026-01-13 00:46:05.975843 | orchestrator | 2026-01-13 00:46:05.975851 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-01-13 00:46:05.975859 | orchestrator | Tuesday 13 January 2026 00:45:50 +0000 (0:00:34.589) 0:02:03.252 ******* 2026-01-13 00:46:05.975866 | orchestrator | ok: [testbed-manager] 2026-01-13 00:46:05.975874 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:46:05.975881 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:46:05.975889 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:46:05.975902 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:46:05.975910 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:46:05.975918 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:46:05.975925 | orchestrator | 2026-01-13 00:46:05.975933 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-01-13 00:46:05.975941 | orchestrator | Tuesday 13 January 2026 00:45:53 +0000 (0:00:02.078) 0:02:05.331 ******* 2026-01-13 00:46:05.975948 | orchestrator | changed: [testbed-manager] 2026-01-13 00:46:05.975955 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:46:05.975963 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:46:05.975970 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:46:05.975978 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:46:05.975985 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:46:05.975993 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:46:05.976000 | orchestrator | 2026-01-13 00:46:05.976008 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:46:05.976017 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-13 00:46:05.976025 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-13 
00:46:05.976032 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-13 00:46:05.976041 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-13 00:46:05.976049 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-13 00:46:05.976056 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-13 00:46:05.976064 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-13 00:46:05.976071 | orchestrator | 2026-01-13 00:46:05.976079 | orchestrator | 2026-01-13 00:46:05.976086 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:46:05.976094 | orchestrator | Tuesday 13 January 2026 00:46:02 +0000 (0:00:09.755) 0:02:15.086 ******* 2026-01-13 00:46:05.976101 | orchestrator | =============================================================================== 2026-01-13 00:46:05.976112 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.59s 2026-01-13 00:46:05.976120 | orchestrator | common : Restart fluentd container ------------------------------------- 30.90s 2026-01-13 00:46:05.976128 | orchestrator | common : Restart cron container ----------------------------------------- 9.76s 2026-01-13 00:46:05.976135 | orchestrator | common : Copying over config.json files for services -------------------- 6.38s 2026-01-13 00:46:05.976143 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.75s 2026-01-13 00:46:05.976150 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.71s 2026-01-13 00:46:05.976157 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.84s 2026-01-13 
00:46:05.976163 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.63s
2026-01-13 00:46:05.976170 | orchestrator | common : Check common containers ---------------------------------------- 3.33s
2026-01-13 00:46:05.976176 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.32s
2026-01-13 00:46:05.976183 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.12s
2026-01-13 00:46:05.976190 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.06s
2026-01-13 00:46:05.976196 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.61s
2026-01-13 00:46:05.976208 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.43s
2026-01-13 00:46:05.976218 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.34s
2026-01-13 00:46:05.976225 | orchestrator | common : Creating log volume -------------------------------------------- 2.18s
2026-01-13 00:46:05.976232 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.08s
2026-01-13 00:46:05.976238 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.93s
2026-01-13 00:46:05.976245 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.54s
2026-01-13 00:46:05.976252 | orchestrator | common : include_tasks -------------------------------------------------- 1.24s
2026-01-13 00:46:05.976258 | orchestrator | 2026-01-13 00:46:05 | INFO  | Task 45ec02fd-74ee-40be-8c20-5bba8185c997 is in state STARTED
2026-01-13 00:46:05.976265 | orchestrator | 2026-01-13 00:46:05 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:46:05.976272 | orchestrator | 2026-01-13 00:46:05 | INFO  | Task 09713e6c-f6b7-47e5-9300-5209f889f05e is in state STARTED
2026-01-13 00:46:05.976279 | orchestrator | 2026-01-13 00:46:05 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:46:09.011462 | orchestrator | 2026-01-13 00:46:09 | INFO  | Task b9424a02-7da9-4373-8629-d432bfa169ab is in state STARTED
2026-01-13 00:46:09.011547 | orchestrator | 2026-01-13 00:46:09 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:46:09.011555 | orchestrator | 2026-01-13 00:46:09 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:46:09.011559 | orchestrator | 2026-01-13 00:46:09 | INFO  | Task 45ec02fd-74ee-40be-8c20-5bba8185c997 is in state STARTED
2026-01-13 00:46:09.012048 | orchestrator | 2026-01-13 00:46:09 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:46:09.013163 | orchestrator | 2026-01-13 00:46:09 | INFO  | Task 09713e6c-f6b7-47e5-9300-5209f889f05e is in state STARTED
2026-01-13 00:46:09.013214 | orchestrator | 2026-01-13 00:46:09 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:46:12.047128 | orchestrator | 2026-01-13 00:46:12 | INFO  | Task b9424a02-7da9-4373-8629-d432bfa169ab is in state STARTED
2026-01-13 00:46:12.047620 | orchestrator | 2026-01-13 00:46:12 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:46:12.049684 | orchestrator | 2026-01-13 00:46:12 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:46:12.051157 | orchestrator | 2026-01-13 00:46:12 | INFO  | Task 45ec02fd-74ee-40be-8c20-5bba8185c997 is in state STARTED
2026-01-13 00:46:12.052530 | orchestrator | 2026-01-13 00:46:12 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:46:12.053890 | orchestrator | 2026-01-13 00:46:12 | INFO  | Task 09713e6c-f6b7-47e5-9300-5209f889f05e is in state STARTED
2026-01-13 00:46:12.053940 | orchestrator | 2026-01-13 00:46:12 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:46:15.085703 | orchestrator | 2026-01-13 00:46:15 | INFO  | Task b9424a02-7da9-4373-8629-d432bfa169ab is in state STARTED
2026-01-13 00:46:15.087131 | orchestrator | 2026-01-13 00:46:15 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:46:15.087756 | orchestrator | 2026-01-13 00:46:15 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:46:15.089126 | orchestrator | 2026-01-13 00:46:15 | INFO  | Task 45ec02fd-74ee-40be-8c20-5bba8185c997 is in state STARTED
2026-01-13 00:46:15.090398 | orchestrator | 2026-01-13 00:46:15 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:46:15.092474 | orchestrator | 2026-01-13 00:46:15 | INFO  | Task 09713e6c-f6b7-47e5-9300-5209f889f05e is in state STARTED
2026-01-13 00:46:15.092529 | orchestrator | 2026-01-13 00:46:15 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:46:18.137007 | orchestrator | 2026-01-13 00:46:18 | INFO  | Task b9424a02-7da9-4373-8629-d432bfa169ab is in state STARTED
2026-01-13 00:46:18.137834 | orchestrator | 2026-01-13 00:46:18 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:46:18.139878 | orchestrator | 2026-01-13 00:46:18 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:46:18.139912 | orchestrator | 2026-01-13 00:46:18 | INFO  | Task 45ec02fd-74ee-40be-8c20-5bba8185c997 is in state STARTED
2026-01-13 00:46:18.139917 | orchestrator | 2026-01-13 00:46:18 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:46:18.141036 | orchestrator | 2026-01-13 00:46:18 | INFO  | Task 09713e6c-f6b7-47e5-9300-5209f889f05e is in state STARTED
2026-01-13 00:46:18.141061 | orchestrator | 2026-01-13 00:46:18 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:46:21.178780 | orchestrator | 2026-01-13 00:46:21 | INFO  | Task b9424a02-7da9-4373-8629-d432bfa169ab is in state STARTED
2026-01-13 00:46:21.178927 | orchestrator | 2026-01-13 00:46:21 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:46:21.179809 | orchestrator | 2026-01-13 00:46:21 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:46:21.180800 | orchestrator | 2026-01-13 00:46:21 | INFO  | Task 45ec02fd-74ee-40be-8c20-5bba8185c997 is in state STARTED
2026-01-13 00:46:21.181860 | orchestrator | 2026-01-13 00:46:21 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:46:21.182833 | orchestrator | 2026-01-13 00:46:21 | INFO  | Task 09713e6c-f6b7-47e5-9300-5209f889f05e is in state STARTED
2026-01-13 00:46:21.183691 | orchestrator | 2026-01-13 00:46:21 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:46:24.222624 | orchestrator | 2026-01-13 00:46:24 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED
2026-01-13 00:46:24.223163 | orchestrator | 2026-01-13 00:46:24 | INFO  | Task b9424a02-7da9-4373-8629-d432bfa169ab is in state STARTED
2026-01-13 00:46:24.223967 | orchestrator | 2026-01-13 00:46:24 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:46:24.225399 | orchestrator | 2026-01-13 00:46:24 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:46:24.225879 | orchestrator | 2026-01-13 00:46:24 | INFO  | Task 45ec02fd-74ee-40be-8c20-5bba8185c997 is in state SUCCESS
2026-01-13 00:46:24.227494 | orchestrator | 2026-01-13 00:46:24 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:46:24.228305 | orchestrator | 2026-01-13 00:46:24 | INFO  | Task 09713e6c-f6b7-47e5-9300-5209f889f05e is in state STARTED
2026-01-13 00:46:24.228321 | orchestrator | 2026-01-13 00:46:24 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:46:27.264399 | orchestrator | 2026-01-13 00:46:27 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED
2026-01-13 00:46:27.268530 | orchestrator | 2026-01-13 00:46:27 | INFO  | Task b9424a02-7da9-4373-8629-d432bfa169ab is in state STARTED
2026-01-13 00:46:27.268672 | orchestrator | 2026-01-13 00:46:27 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:46:27.268685 | orchestrator | 2026-01-13 00:46:27 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:46:27.268837 | orchestrator | 2026-01-13 00:46:27 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:46:27.268917 | orchestrator | 2026-01-13 00:46:27 | INFO  | Task 09713e6c-f6b7-47e5-9300-5209f889f05e is in state STARTED
2026-01-13 00:46:27.268928 | orchestrator | 2026-01-13 00:46:27 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:46:30.302638 | orchestrator | 2026-01-13 00:46:30 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED
2026-01-13 00:46:30.302733 | orchestrator | 2026-01-13 00:46:30 | INFO  | Task b9424a02-7da9-4373-8629-d432bfa169ab is in state STARTED
2026-01-13 00:46:30.302764 | orchestrator | 2026-01-13 00:46:30 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:46:30.303437 | orchestrator | 2026-01-13 00:46:30 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:46:30.304059 | orchestrator | 2026-01-13 00:46:30 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:46:30.304714 | orchestrator | 2026-01-13 00:46:30 | INFO  | Task 09713e6c-f6b7-47e5-9300-5209f889f05e is in state STARTED
2026-01-13 00:46:30.304728 | orchestrator | 2026-01-13 00:46:30 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:46:33.341315 | orchestrator | 2026-01-13 00:46:33 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED
2026-01-13 00:46:33.341442 | orchestrator | 2026-01-13 00:46:33 | INFO  | Task b9424a02-7da9-4373-8629-d432bfa169ab is in state STARTED
2026-01-13 00:46:33.344140 | orchestrator | 2026-01-13 00:46:33 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:46:33.344192 | orchestrator | 2026-01-13 00:46:33 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:46:33.345522 | orchestrator | 2026-01-13 00:46:33 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:46:33.347276 | orchestrator | 2026-01-13 00:46:33 | INFO  | Task 09713e6c-f6b7-47e5-9300-5209f889f05e is in state STARTED
2026-01-13 00:46:33.347309 | orchestrator | 2026-01-13 00:46:33 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:46:36.415900 | orchestrator | 2026-01-13 00:46:36 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED
2026-01-13 00:46:36.416029 | orchestrator | 2026-01-13 00:46:36 | INFO  | Task b9424a02-7da9-4373-8629-d432bfa169ab is in state STARTED
2026-01-13 00:46:36.416548 | orchestrator | 2026-01-13 00:46:36 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:46:36.417223 | orchestrator | 2026-01-13 00:46:36 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:46:36.418135 | orchestrator | 2026-01-13 00:46:36 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:46:36.418746 | orchestrator | 2026-01-13 00:46:36 | INFO  | Task 09713e6c-f6b7-47e5-9300-5209f889f05e is in state STARTED
2026-01-13 00:46:36.419052 | orchestrator | 2026-01-13 00:46:36 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:46:39.556311 | orchestrator |
2026-01-13 00:46:39.556404 | orchestrator |
2026-01-13 00:46:39.556415 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-13 00:46:39.556422 | orchestrator |
2026-01-13 00:46:39.556428 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-13 00:46:39.556435 | orchestrator |
Tuesday 13 January 2026 00:46:08 +0000 (0:00:00.227) 0:00:00.227 *******
2026-01-13 00:46:39.556441 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:46:39.556449 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:46:39.556474 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:46:39.556480 | orchestrator |
2026-01-13 00:46:39.556486 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-13 00:46:39.556492 | orchestrator | Tuesday 13 January 2026 00:46:09 +0000 (0:00:00.270) 0:00:00.497 *******
2026-01-13 00:46:39.556507 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-01-13 00:46:39.556514 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-01-13 00:46:39.556520 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-01-13 00:46:39.556526 | orchestrator |
2026-01-13 00:46:39.556532 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-01-13 00:46:39.556538 | orchestrator |
2026-01-13 00:46:39.556544 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-01-13 00:46:39.556550 | orchestrator | Tuesday 13 January 2026 00:46:09 +0000 (0:00:00.416) 0:00:00.914 *******
2026-01-13 00:46:39.556556 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:46:39.556615 | orchestrator |
2026-01-13 00:46:39.556628 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-01-13 00:46:39.556648 | orchestrator | Tuesday 13 January 2026 00:46:10 +0000 (0:00:00.495) 0:00:01.409 *******
2026-01-13 00:46:39.556658 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-13 00:46:39.556668 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-13 00:46:39.556675 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-13 00:46:39.556681 | orchestrator |
2026-01-13 00:46:39.556687 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-01-13 00:46:39.556695 | orchestrator | Tuesday 13 January 2026 00:46:10 +0000 (0:00:00.634) 0:00:02.043 *******
2026-01-13 00:46:39.556704 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-13 00:46:39.556710 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-13 00:46:39.556716 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-13 00:46:39.556722 | orchestrator |
2026-01-13 00:46:39.556728 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-01-13 00:46:39.556733 | orchestrator | Tuesday 13 January 2026 00:46:12 +0000 (0:00:01.907) 0:00:03.951 *******
2026-01-13 00:46:39.556739 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:46:39.556767 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:46:39.556773 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:46:39.556779 | orchestrator |
2026-01-13 00:46:39.556785 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-01-13 00:46:39.556791 | orchestrator | Tuesday 13 January 2026 00:46:14 +0000 (0:00:01.570) 0:00:05.521 *******
2026-01-13 00:46:39.556796 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:46:39.556802 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:46:39.556808 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:46:39.556813 | orchestrator |
2026-01-13 00:46:39.556819 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 00:46:39.556825 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:46:39.556833 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:46:39.556839 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:46:39.556846 | orchestrator |
2026-01-13 00:46:39.556853 | orchestrator |
2026-01-13 00:46:39.556859 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 00:46:39.556865 | orchestrator | Tuesday 13 January 2026 00:46:22 +0000 (0:00:08.193) 0:00:13.715 *******
2026-01-13 00:46:39.556872 | orchestrator | ===============================================================================
2026-01-13 00:46:39.556887 | orchestrator | memcached : Restart memcached container --------------------------------- 8.19s
2026-01-13 00:46:39.556897 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.91s
2026-01-13 00:46:39.556905 | orchestrator | memcached : Check memcached container ----------------------------------- 1.57s
2026-01-13 00:46:39.556912 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.63s
2026-01-13 00:46:39.556918 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.50s
2026-01-13 00:46:39.556925 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s
2026-01-13 00:46:39.556931 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-01-13 00:46:39.556937 | orchestrator |
2026-01-13 00:46:39.556944 | orchestrator |
2026-01-13 00:46:39.556950 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-13 00:46:39.556956 | orchestrator |
2026-01-13 00:46:39.556963 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-13 00:46:39.556969 | orchestrator | Tuesday 13 January 2026 00:46:08 +0000 (0:00:00.216) 0:00:00.217 *******
2026-01-13 00:46:39.556976 | orchestrator | ok:
[testbed-node-0]
2026-01-13 00:46:39.556983 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:46:39.556989 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:46:39.556995 | orchestrator |
2026-01-13 00:46:39.557002 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-13 00:46:39.557028 | orchestrator | Tuesday 13 January 2026 00:46:08 +0000 (0:00:00.236) 0:00:00.453 *******
2026-01-13 00:46:39.557035 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-01-13 00:46:39.557042 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-01-13 00:46:39.557048 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-01-13 00:46:39.557054 | orchestrator |
2026-01-13 00:46:39.557061 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-01-13 00:46:39.557068 | orchestrator |
2026-01-13 00:46:39.557075 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-01-13 00:46:39.557082 | orchestrator | Tuesday 13 January 2026 00:46:09 +0000 (0:00:00.396) 0:00:00.850 *******
2026-01-13 00:46:39.557088 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:46:39.557095 | orchestrator |
2026-01-13 00:46:39.557101 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-01-13 00:46:39.557108 | orchestrator | Tuesday 13 January 2026 00:46:09 +0000 (0:00:00.463) 0:00:01.314 *******
2026-01-13 00:46:39.557128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557195 | orchestrator |
2026-01-13 00:46:39.557202 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-01-13 00:46:39.557208 | orchestrator | Tuesday 13 January 2026 00:46:10 +0000 (0:00:01.204) 0:00:02.519 *******
2026-01-13 00:46:39.557215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557269 | orchestrator |
2026-01-13 00:46:39.557275 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-01-13 00:46:39.557281 | orchestrator | Tuesday 13 January 2026 00:46:13 +0000 (0:00:02.515) 0:00:05.034 *******
2026-01-13 00:46:39.557287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557332 | orchestrator |
2026-01-13 00:46:39.557343 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-01-13 00:46:39.557354 | orchestrator | Tuesday 13 January 2026 00:46:15 +0000 (0:00:02.372) 0:00:07.406 *******
2026-01-13 00:46:39.557360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-13 00:46:39.557404 | orchestrator |
2026-01-13 00:46:39.557411 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-13 00:46:39.557420 | orchestrator | Tuesday 13 January 2026 00:46:17 +0000 (0:00:02.044) 0:00:09.451 *******
2026-01-13 00:46:39.557429 | orchestrator |
2026-01-13 00:46:39.557440 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-13 00:46:39.557453 | orchestrator | Tuesday 13 January 2026 00:46:17 +0000 (0:00:00.109) 0:00:09.561 *******
2026-01-13 00:46:39.557462 | orchestrator |
2026-01-13 00:46:39.557471 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-13 00:46:39.557481 | orchestrator | Tuesday 13 January 2026 00:46:18 +0000 (0:00:00.104) 0:00:09.666 *******
2026-01-13 00:46:39.557491 | orchestrator |
2026-01-13 00:46:39.557501 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-01-13 00:46:39.557510 | orchestrator | Tuesday 13 January 2026 00:46:18 +0000 (0:00:00.090) 0:00:09.756 *******
2026-01-13 00:46:39.557519 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:46:39.557528 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:46:39.557534 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:46:39.557539 | orchestrator |
2026-01-13 00:46:39.557545 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-01-13 00:46:39.557551 | orchestrator | Tuesday 13 January 2026 00:46:27 +0000 (0:00:09.003) 0:00:18.759 *******
2026-01-13 00:46:39.557610 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:46:39.557617 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:46:39.557623 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:46:39.557629 | orchestrator |
2026-01-13 00:46:39.557634 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 00:46:39.557640 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:46:39.557646 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:46:39.557652 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:46:39.557658 | orchestrator |
2026-01-13 00:46:39.557664 | orchestrator |
2026-01-13 00:46:39.557669 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 00:46:39.557675 | orchestrator | Tuesday 13 January 2026 00:46:37 +0000 (0:00:09.987) 0:00:28.747 *******
2026-01-13 00:46:39.557681 | orchestrator | ===============================================================================
2026-01-13 00:46:39.557687 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.99s
2026-01-13 00:46:39.557692 | orchestrator | redis : Restart redis container ----------------------------------------- 9.00s
2026-01-13 00:46:39.557698 | orchestrator | redis : Copying over default config.json files -------------------------- 2.52s
2026-01-13 00:46:39.557704 | orchestrator | redis : Copying over redis config files --------------------------------- 2.37s
2026-01-13 00:46:39.557709 | orchestrator | redis : Check redis containers ------------------------------------------ 2.05s
2026-01-13 00:46:39.557716 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.20s
2026-01-13 00:46:39.557721 | orchestrator | redis : include_tasks --------------------------------------------------- 0.46s
2026-01-13
00:46:39.557727 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2026-01-13 00:46:39.557733 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.30s
2026-01-13 00:46:39.557739 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.24s
2026-01-13 00:46:39.557746 | orchestrator | 2026-01-13 00:46:39 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED
2026-01-13 00:46:39.557752 | orchestrator | 2026-01-13 00:46:39 | INFO  | Task b9424a02-7da9-4373-8629-d432bfa169ab is in state SUCCESS
2026-01-13 00:46:39.557758 | orchestrator | 2026-01-13 00:46:39 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:46:39.557763 | orchestrator | 2026-01-13 00:46:39 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:46:39.557769 | orchestrator | 2026-01-13 00:46:39 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:46:39.557775 | orchestrator | 2026-01-13 00:46:39 | INFO  | Task 09713e6c-f6b7-47e5-9300-5209f889f05e is in state STARTED
2026-01-13 00:46:39.557781 | orchestrator | 2026-01-13 00:46:39 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:47:12.977205 | orchestrator | 2026-01-13 00:47:12 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED
2026-01-13 00:47:12.977905 | orchestrator | 2026-01-13 00:47:12 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:47:12.978884 | orchestrator | 2026-01-13 00:47:12 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED
2026-01-13 00:47:12.979827 | orchestrator | 2026-01-13 00:47:12 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:47:12.984191 | orchestrator | 2026-01-13 00:47:12 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:47:12.985340 | orchestrator | 2026-01-13 00:47:12 | INFO  | Task 
09713e6c-f6b7-47e5-9300-5209f889f05e is in state SUCCESS
2026-01-13 00:47:12.987273 | orchestrator | 2026-01-13 00:47:12 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:47:12.988582 | orchestrator | 
2026-01-13 00:47:12.988600 | orchestrator | 
2026-01-13 00:47:12.988604 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-13 00:47:12.988607 | orchestrator | 
2026-01-13 00:47:12.988610 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-13 00:47:12.988614 | orchestrator | Tuesday 13 January 2026 00:46:08 +0000 (0:00:00.202) 0:00:00.202 *******
2026-01-13 00:47:12.988617 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:47:12.988621 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:47:12.988624 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:47:12.988627 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:47:12.988630 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:47:12.988633 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:47:12.988636 | orchestrator | 
2026-01-13 00:47:12.988639 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-13 00:47:12.988643 | orchestrator | Tuesday 13 January 2026 00:46:09 +0000 (0:00:00.690) 0:00:00.893 *******
2026-01-13 00:47:12.988646 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-13 00:47:12.988650 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-13 00:47:12.988653 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-13 00:47:12.988656 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-13 00:47:12.988659 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-13 00:47:12.988662 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-13 00:47:12.988665 | orchestrator | 
2026-01-13 00:47:12.988668 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-01-13 00:47:12.988671 | orchestrator | 
2026-01-13 00:47:12.988674 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-01-13 00:47:12.988677 | orchestrator | Tuesday 13 January 2026 00:46:09 +0000 (0:00:00.567) 0:00:01.461 *******
2026-01-13 00:47:12.988680 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 00:47:12.988684 | orchestrator | 
2026-01-13 00:47:12.988687 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-13 00:47:12.988690 | orchestrator | Tuesday 13 January 2026 00:46:10 +0000 (0:00:01.174) 0:00:02.635 *******
2026-01-13 00:47:12.988694 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-13 00:47:12.988697 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-13 00:47:12.988700 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-13 00:47:12.988703 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-13 00:47:12.988706 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-13 00:47:12.988709 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-13 00:47:12.988712 | orchestrator | 
2026-01-13 00:47:12.988715 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-13 00:47:12.988718 | orchestrator | Tuesday 13 January 2026 00:46:12 +0000 (0:00:01.393) 0:00:04.029 *******
2026-01-13 00:47:12.988729 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-13 00:47:12.988732 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-13 00:47:12.988735 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-13 00:47:12.988738 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-13 00:47:12.988741 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-13 00:47:12.988744 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-13 00:47:12.988747 | orchestrator | 
2026-01-13 00:47:12.988750 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-13 00:47:12.988754 | orchestrator | Tuesday 13 January 2026 00:46:13 +0000 (0:00:01.466) 0:00:05.496 *******
2026-01-13 00:47:12.988757 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch) 
2026-01-13 00:47:12.988760 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:47:12.988763 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch) 
2026-01-13 00:47:12.988766 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:47:12.988769 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch) 
2026-01-13 00:47:12.988772 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:47:12.988775 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch) 
2026-01-13 00:47:12.988778 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:47:12.988781 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch) 
2026-01-13 00:47:12.988784 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:47:12.988787 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch) 
2026-01-13 00:47:12.988793 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:47:12.988796 | orchestrator | 
2026-01-13 00:47:12.988799 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-01-13 00:47:12.988802 | orchestrator | Tuesday 13 January 2026 00:46:14 +0000 (0:00:01.101) 0:00:06.597 *******
2026-01-13 00:47:12.988805 |
orchestrator | skipping: [testbed-node-0]
2026-01-13 00:47:12.988808 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:47:12.988811 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:47:12.988814 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:47:12.988817 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:47:12.988820 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:47:12.988823 | orchestrator | 
2026-01-13 00:47:12.988826 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-01-13 00:47:12.988829 | orchestrator | Tuesday 13 January 2026 00:46:15 +0000 (0:00:00.770) 0:00:07.368 *******
2026-01-13 00:47:12.988840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-13 00:47:12.988844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-13 00:47:12.988850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-13 00:47:12.988853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-13 00:47:12.988858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-13 00:47:12.988862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-13 00:47:12.988867 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-13 00:47:12.988871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-13 00:47:12.988876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-13 00:47:12.988879 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-13 00:47:12.988883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-13 00:47:12.988888 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-13 00:47:12.988892 | orchestrator | 
2026-01-13 00:47:12.988895 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-01-13 00:47:12.988898 | orchestrator | Tuesday 13 January 2026 00:46:17 +0000 (0:00:01.851) 0:00:09.220 *******
2026-01-13 00:47:12.988901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-13 00:47:12.988907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-13 00:47:12.988910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-13 00:47:12.988913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-13 00:47:12.988918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-13 00:47:12.988924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-13 00:47:12.988927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-13 00:47:12.988932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-13 00:47:12.988936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-13 00:47:12.988939 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-13 00:47:12.988943 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-13 00:47:12.988949 | orchestrator | changed: [testbed-node-5] 
=> (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-13 00:47:12.988955 | orchestrator | 2026-01-13 00:47:12.988958 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-13 00:47:12.988961 | orchestrator | Tuesday 13 January 2026 00:46:20 +0000 (0:00:02.935) 0:00:12.156 ******* 2026-01-13 00:47:12.988964 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:47:12.988967 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:47:12.988970 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:47:12.988973 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:47:12.988976 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:47:12.988979 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:47:12.988982 | orchestrator | 2026-01-13 00:47:12.988985 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-01-13 00:47:12.988988 | orchestrator | Tuesday 13 January 2026 00:46:21 +0000 (0:00:00.990) 0:00:13.146 ******* 2026-01-13 00:47:12.988991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-13 00:47:12.988994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-13 00:47:12.988998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-13 00:47:12.989002 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-13 00:47:12.989007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-13 00:47:12.989012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-13 00:47:12.989015 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-13 00:47:12.989019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-13 00:47:12.989022 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-13 00:47:12.989026 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-13 00:47:12.989035 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-13 00:47:12.989038 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-13 00:47:12.989041 | orchestrator | 2026-01-13 00:47:12.989044 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-13 00:47:12.989047 | orchestrator | Tuesday 13 January 2026 00:46:24 +0000 (0:00:02.915) 0:00:16.062 ******* 2026-01-13 00:47:12.989050 | orchestrator | 2026-01-13 00:47:12.989053 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-13 00:47:12.989057 | orchestrator | Tuesday 13 January 2026 00:46:24 +0000 (0:00:00.662) 0:00:16.724 ******* 2026-01-13 00:47:12.989060 | orchestrator | 2026-01-13 00:47:12.989063 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-13 00:47:12.989066 | orchestrator | Tuesday 13 January 2026 00:46:25 +0000 (0:00:00.266) 0:00:16.991 ******* 2026-01-13 00:47:12.989069 | orchestrator | 2026-01-13 00:47:12.989072 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-13 00:47:12.989075 | orchestrator | Tuesday 13 January 2026 00:46:25 +0000 (0:00:00.324) 0:00:17.315 ******* 2026-01-13 00:47:12.989078 | orchestrator | 2026-01-13 00:47:12.989081 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-13 00:47:12.989083 | orchestrator | Tuesday 13 January 2026 00:46:25 +0000 (0:00:00.356) 0:00:17.671 ******* 2026-01-13 
00:47:12.989086 | orchestrator | 2026-01-13 00:47:12.989090 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-13 00:47:12.989093 | orchestrator | Tuesday 13 January 2026 00:46:26 +0000 (0:00:00.277) 0:00:17.948 ******* 2026-01-13 00:47:12.989096 | orchestrator | 2026-01-13 00:47:12.989099 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-13 00:47:12.989102 | orchestrator | Tuesday 13 January 2026 00:46:26 +0000 (0:00:00.278) 0:00:18.227 ******* 2026-01-13 00:47:12.989105 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:47:12.989108 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:47:12.989111 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:47:12.989114 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:47:12.989117 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:47:12.989120 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:47:12.989123 | orchestrator | 2026-01-13 00:47:12.989126 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-13 00:47:12.989129 | orchestrator | Tuesday 13 January 2026 00:46:37 +0000 (0:00:10.747) 0:00:28.974 ******* 2026-01-13 00:47:12.989132 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:47:12.989135 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:47:12.989138 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:47:12.989141 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:47:12.989146 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:47:12.989149 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:47:12.989152 | orchestrator | 2026-01-13 00:47:12.989155 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-13 00:47:12.989158 | orchestrator | Tuesday 13 January 2026 00:46:38 +0000 (0:00:01.130) 0:00:30.104 ******* 2026-01-13 00:47:12.989161 | orchestrator | changed: 
[testbed-node-0] 2026-01-13 00:47:12.989164 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:47:12.989167 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:47:12.989170 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:47:12.989173 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:47:12.989176 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:47:12.989179 | orchestrator | 2026-01-13 00:47:12.989183 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-13 00:47:12.989186 | orchestrator | Tuesday 13 January 2026 00:46:47 +0000 (0:00:09.252) 0:00:39.357 ******* 2026-01-13 00:47:12.989189 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-13 00:47:12.989192 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-01-13 00:47:12.989195 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-13 00:47:12.989201 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-13 00:47:12.989206 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-13 00:47:12.989213 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-13 00:47:12.989219 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-13 00:47:12.989225 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-13 00:47:12.989229 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 
'testbed-node-0'}) 2026-01-13 00:47:12.989232 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-13 00:47:12.989235 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-13 00:47:12.989238 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-13 00:47:12.989241 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-13 00:47:12.989244 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-13 00:47:12.989247 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-13 00:47:12.989250 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-13 00:47:12.989253 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-13 00:47:12.989256 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-13 00:47:12.989259 | orchestrator | 2026-01-13 00:47:12.989262 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-13 00:47:12.989265 | orchestrator | Tuesday 13 January 2026 00:46:55 +0000 (0:00:07.869) 0:00:47.226 ******* 2026-01-13 00:47:12.989268 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-13 00:47:12.989271 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-13 00:47:12.989276 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:47:12.989279 | orchestrator | skipping: 
[testbed-node-4] 2026-01-13 00:47:12.989283 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-13 00:47:12.989286 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:47:12.989289 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-13 00:47:12.989292 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-13 00:47:12.989295 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-13 00:47:12.989298 | orchestrator | 2026-01-13 00:47:12.989301 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-13 00:47:12.989304 | orchestrator | Tuesday 13 January 2026 00:46:58 +0000 (0:00:03.046) 0:00:50.273 ******* 2026-01-13 00:47:12.989307 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-13 00:47:12.989310 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:47:12.989313 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-13 00:47:12.989316 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:47:12.989319 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-13 00:47:12.989322 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:47:12.989325 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-13 00:47:12.989328 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-13 00:47:12.989331 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-13 00:47:12.989334 | orchestrator | 2026-01-13 00:47:12.989337 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-13 00:47:12.989340 | orchestrator | Tuesday 13 January 2026 00:47:02 +0000 (0:00:03.928) 0:00:54.201 ******* 2026-01-13 00:47:12.989343 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:47:12.989346 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:47:12.989349 | 
orchestrator | changed: [testbed-node-2] 2026-01-13 00:47:12.989352 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:47:12.989355 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:47:12.989358 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:47:12.989361 | orchestrator | 2026-01-13 00:47:12.989364 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:47:12.989367 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-13 00:47:12.989370 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-13 00:47:12.989376 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-13 00:47:12.989379 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-13 00:47:12.989383 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-13 00:47:12.989387 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-13 00:47:12.989391 | orchestrator | 2026-01-13 00:47:12.989394 | orchestrator | 2026-01-13 00:47:12.989397 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:47:12.989400 | orchestrator | Tuesday 13 January 2026 00:47:10 +0000 (0:00:07.944) 0:01:02.146 ******* 2026-01-13 00:47:12.989403 | orchestrator | =============================================================================== 2026-01-13 00:47:12.989406 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.20s 2026-01-13 00:47:12.989409 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.75s 2026-01-13 00:47:12.989414 | orchestrator | openvswitch : Set system-id, 
hostname and hw-offload -------------------- 7.87s 2026-01-13 00:47:12.989417 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.93s 2026-01-13 00:47:12.989420 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.05s 2026-01-13 00:47:12.989423 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.94s 2026-01-13 00:47:12.989426 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.92s 2026-01-13 00:47:12.989429 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.16s 2026-01-13 00:47:12.989432 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.85s 2026-01-13 00:47:12.989435 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.47s 2026-01-13 00:47:12.989438 | orchestrator | module-load : Load modules ---------------------------------------------- 1.39s 2026-01-13 00:47:12.989441 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.17s 2026-01-13 00:47:12.989444 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.13s 2026-01-13 00:47:12.989447 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.10s 2026-01-13 00:47:12.989450 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.99s 2026-01-13 00:47:12.989453 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.77s 2026-01-13 00:47:12.989456 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s 2026-01-13 00:47:12.989459 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2026-01-13 00:47:16.017178 | orchestrator | 2026-01-13 00:47:16 | INFO  | Task 
e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:47:16.018347 | orchestrator | 2026-01-13 00:47:16 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:47:16.020397 | orchestrator | 2026-01-13 00:47:16 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:47:16.022503 | orchestrator | 2026-01-13 00:47:16 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:47:16.023200 | orchestrator | 2026-01-13 00:47:16 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:47:16.023364 | orchestrator | 2026-01-13 00:47:16 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:47:19.052777 | orchestrator | 2026-01-13 00:47:19 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:47:19.053125 | orchestrator | 2026-01-13 00:47:19 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:47:19.054061 | orchestrator | 2026-01-13 00:47:19 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:47:19.054709 | orchestrator | 2026-01-13 00:47:19 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:47:19.055359 | orchestrator | 2026-01-13 00:47:19 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:47:19.055429 | orchestrator | 2026-01-13 00:47:19 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:47:22.083201 | orchestrator | 2026-01-13 00:47:22 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:47:22.083781 | orchestrator | 2026-01-13 00:47:22 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:47:22.084799 | orchestrator | 2026-01-13 00:47:22 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:47:22.085665 | orchestrator | 2026-01-13 00:47:22 | INFO  | Task 
3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:47:22.086481 | orchestrator | 2026-01-13 00:47:22 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:47:22.086516 | orchestrator | 2026-01-13 00:47:22 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:47:25.127904 | orchestrator | 2026-01-13 00:47:25 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:47:25.129336 | orchestrator | 2026-01-13 00:47:25 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:47:25.132872 | orchestrator | 2026-01-13 00:47:25 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:47:25.133933 | orchestrator | 2026-01-13 00:47:25 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:47:25.134950 | orchestrator | 2026-01-13 00:47:25 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:47:25.134990 | orchestrator | 2026-01-13 00:47:25 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:47:28.165353 | orchestrator | 2026-01-13 00:47:28 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:47:28.166698 | orchestrator | 2026-01-13 00:47:28 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:47:28.168665 | orchestrator | 2026-01-13 00:47:28 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state STARTED 2026-01-13 00:47:28.171085 | orchestrator | 2026-01-13 00:47:28 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:47:28.171784 | orchestrator | 2026-01-13 00:47:28 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:47:28.171820 | orchestrator | 2026-01-13 00:47:28 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:47:31.201107 | orchestrator | 2026-01-13 00:47:31 | INFO  | Task 
3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:48:08.235195 | orchestrator | 2026-01-13 00:48:08 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:48:08.235335 | orchestrator | 2026-01-13 00:48:08 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:48:11.281237 | orchestrator | 2026-01-13 00:48:11 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:48:11.283508 | orchestrator | 2026-01-13 00:48:11 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:48:11.284746 | orchestrator | 2026-01-13 00:48:11 | INFO  | Task ae35fb1d-5bc3-476d-a48a-9d1c5939422c is in state SUCCESS 2026-01-13 00:48:11.285784 | orchestrator | 2026-01-13 00:48:11.285812 | orchestrator | 2026-01-13 00:48:11.285817 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-13 00:48:11.285822 | orchestrator | 2026-01-13 00:48:11.285825 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-01-13 00:48:11.285829 | orchestrator | Tuesday 13 January 2026 00:43:48 +0000 (0:00:00.170) 0:00:00.170 ******* 2026-01-13 00:48:11.285833 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:48:11.285837 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:48:11.285841 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:48:11.285844 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:48:11.285848 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:48:11.285851 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:48:11.285855 | orchestrator | 2026-01-13 00:48:11.285858 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-01-13 00:48:11.285862 | orchestrator | Tuesday 13 January 2026 00:43:48 +0000 (0:00:00.686) 0:00:00.857 ******* 2026-01-13 00:48:11.285865 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:48:11.285869 | 
orchestrator | skipping: [testbed-node-4] 2026-01-13 00:48:11.285873 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:48:11.285876 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.285880 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.285883 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.285886 | orchestrator | 2026-01-13 00:48:11.285897 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-13 00:48:11.285901 | orchestrator | Tuesday 13 January 2026 00:43:49 +0000 (0:00:00.508) 0:00:01.366 ******* 2026-01-13 00:48:11.285904 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:48:11.285908 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:48:11.285911 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:48:11.285915 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.285918 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.285921 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.285925 | orchestrator | 2026-01-13 00:48:11.285929 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-13 00:48:11.285932 | orchestrator | Tuesday 13 January 2026 00:43:49 +0000 (0:00:00.591) 0:00:01.957 ******* 2026-01-13 00:48:11.285936 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:48:11.285939 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:11.285943 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:48:11.285946 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:48:11.285974 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:48:11.285978 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:48:11.285982 | orchestrator | 2026-01-13 00:48:11.285985 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-13 00:48:11.285989 | orchestrator | Tuesday 13 January 2026 00:43:52 +0000 
(0:00:02.678) 0:00:04.635 ******* 2026-01-13 00:48:11.285993 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:48:11.285996 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:48:11.286000 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:11.286003 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:48:11.286007 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:48:11.286010 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:48:11.286035 | orchestrator | 2026-01-13 00:48:11.286038 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-13 00:48:11.286041 | orchestrator | Tuesday 13 January 2026 00:43:54 +0000 (0:00:01.815) 0:00:06.451 ******* 2026-01-13 00:48:11.286044 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:48:11.286047 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:11.286050 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:48:11.286053 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:48:11.286056 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:48:11.286059 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:48:11.286070 | orchestrator | 2026-01-13 00:48:11.286073 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-01-13 00:48:11.286080 | orchestrator | Tuesday 13 January 2026 00:43:56 +0000 (0:00:01.962) 0:00:08.414 ******* 2026-01-13 00:48:11.286083 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:48:11.286087 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:48:11.286092 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:48:11.286097 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.286102 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.286107 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.286112 | orchestrator | 2026-01-13 00:48:11.286117 | orchestrator | TASK [k3s_prereq : Load br_netfilter] 
****************************************** 2026-01-13 00:48:11.286123 | orchestrator | Tuesday 13 January 2026 00:43:57 +0000 (0:00:00.876) 0:00:09.290 ******* 2026-01-13 00:48:11.286128 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:48:11.286133 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:48:11.286223 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:48:11.286235 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.286256 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.286261 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.286265 | orchestrator | 2026-01-13 00:48:11.286270 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-13 00:48:11.286275 | orchestrator | Tuesday 13 January 2026 00:43:58 +0000 (0:00:00.800) 0:00:10.090 ******* 2026-01-13 00:48:11.286296 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-13 00:48:11.286302 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-13 00:48:11.286307 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:48:11.286312 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-13 00:48:11.286317 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-13 00:48:11.286322 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-13 00:48:11.286327 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-13 00:48:11.286332 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:48:11.286338 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:48:11.286343 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-13 00:48:11.286357 | orchestrator | skipping: [testbed-node-0] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2026-01-13 00:48:11.286370 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-13 00:48:11.286376 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-13 00:48:11.286381 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.286386 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.286391 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-13 00:48:11.286397 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-13 00:48:11.286402 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.286408 | orchestrator | 2026-01-13 00:48:11.286413 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-13 00:48:11.286418 | orchestrator | Tuesday 13 January 2026 00:43:58 +0000 (0:00:00.586) 0:00:10.677 ******* 2026-01-13 00:48:11.286423 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:48:11.286428 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:48:11.286433 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:48:11.286438 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.286441 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.286444 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.286447 | orchestrator | 2026-01-13 00:48:11.286453 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-13 00:48:11.286497 | orchestrator | Tuesday 13 January 2026 00:43:59 +0000 (0:00:01.141) 0:00:11.818 ******* 2026-01-13 00:48:11.286500 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:48:11.286503 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:48:11.286506 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:48:11.286510 | orchestrator | ok: 
[testbed-node-0] 2026-01-13 00:48:11.286513 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:48:11.286516 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:48:11.286519 | orchestrator | 2026-01-13 00:48:11.286522 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-13 00:48:11.286525 | orchestrator | Tuesday 13 January 2026 00:44:00 +0000 (0:00:01.131) 0:00:12.950 ******* 2026-01-13 00:48:11.286528 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:48:11.286531 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:48:11.286535 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:48:11.286540 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:48:11.286545 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:11.286550 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:48:11.286555 | orchestrator | 2026-01-13 00:48:11.286560 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-13 00:48:11.286565 | orchestrator | Tuesday 13 January 2026 00:44:06 +0000 (0:00:05.994) 0:00:18.944 ******* 2026-01-13 00:48:11.286570 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:48:11.286575 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:48:11.286581 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:48:11.286586 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.286591 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.286596 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.286600 | orchestrator | 2026-01-13 00:48:11.286604 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-13 00:48:11.286607 | orchestrator | Tuesday 13 January 2026 00:44:08 +0000 (0:00:01.150) 0:00:20.095 ******* 2026-01-13 00:48:11.286610 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:48:11.286613 | orchestrator | skipping: [testbed-node-4] 
2026-01-13 00:48:11.286616 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.286619 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.286621 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:48:11.286624 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.286627 | orchestrator | 2026-01-13 00:48:11.286631 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-13 00:48:11.286638 | orchestrator | Tuesday 13 January 2026 00:44:10 +0000 (0:00:02.158) 0:00:22.253 ******* 2026-01-13 00:48:11.286641 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:48:11.286644 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:48:11.286647 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:48:11.286650 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.286653 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.286656 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.286659 | orchestrator | 2026-01-13 00:48:11.286662 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-13 00:48:11.286665 | orchestrator | Tuesday 13 January 2026 00:44:11 +0000 (0:00:00.853) 0:00:23.106 ******* 2026-01-13 00:48:11.286668 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-01-13 00:48:11.286687 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-01-13 00:48:11.286690 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:48:11.286694 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-01-13 00:48:11.286697 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-01-13 00:48:11.286700 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:48:11.286703 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-01-13 00:48:11.286706 | orchestrator | skipping: 
[testbed-node-5] => (item=rancher/k3s)  2026-01-13 00:48:11.286709 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:48:11.286712 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-01-13 00:48:11.286715 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-01-13 00:48:11.286718 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.286721 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-01-13 00:48:11.286724 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-01-13 00:48:11.286727 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.286730 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-01-13 00:48:11.286733 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-01-13 00:48:11.286736 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.286739 | orchestrator | 2026-01-13 00:48:11.286742 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-01-13 00:48:11.286750 | orchestrator | Tuesday 13 January 2026 00:44:12 +0000 (0:00:01.221) 0:00:24.327 ******* 2026-01-13 00:48:11.286753 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:48:11.286756 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:48:11.286759 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:48:11.286766 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.286773 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.286776 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.286779 | orchestrator | 2026-01-13 00:48:11.286782 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-01-13 00:48:11.286785 | orchestrator | Tuesday 13 January 2026 00:44:13 +0000 (0:00:01.153) 0:00:25.481 ******* 2026-01-13 00:48:11.286788 | orchestrator | skipping: [testbed-node-3] 2026-01-13 
00:48:11.286791 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:48:11.286818 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:48:11.286822 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.286825 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.286830 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.286835 | orchestrator | 2026-01-13 00:48:11.286841 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-01-13 00:48:11.286846 | orchestrator | 2026-01-13 00:48:11.286852 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-01-13 00:48:11.286861 | orchestrator | Tuesday 13 January 2026 00:44:15 +0000 (0:00:01.933) 0:00:27.414 ******* 2026-01-13 00:48:11.286867 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:48:11.286873 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:48:11.286879 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:48:11.286882 | orchestrator | 2026-01-13 00:48:11.286885 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-01-13 00:48:11.286889 | orchestrator | Tuesday 13 January 2026 00:44:16 +0000 (0:00:00.953) 0:00:28.367 ******* 2026-01-13 00:48:11.286892 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:48:11.286895 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:48:11.286898 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:48:11.286901 | orchestrator | 2026-01-13 00:48:11.286904 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-01-13 00:48:11.286907 | orchestrator | Tuesday 13 January 2026 00:44:17 +0000 (0:00:01.095) 0:00:29.463 ******* 2026-01-13 00:48:11.286910 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:48:11.286913 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:48:11.286916 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:48:11.286919 | 
orchestrator | 2026-01-13 00:48:11.286922 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-01-13 00:48:11.286925 | orchestrator | Tuesday 13 January 2026 00:44:18 +0000 (0:00:01.038) 0:00:30.502 ******* 2026-01-13 00:48:11.286928 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:48:11.286931 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:48:11.286934 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:48:11.286937 | orchestrator | 2026-01-13 00:48:11.286940 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-01-13 00:48:11.286943 | orchestrator | Tuesday 13 January 2026 00:44:19 +0000 (0:00:00.910) 0:00:31.413 ******* 2026-01-13 00:48:11.286946 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.286949 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.286952 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.286956 | orchestrator | 2026-01-13 00:48:11.286959 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-01-13 00:48:11.286962 | orchestrator | Tuesday 13 January 2026 00:44:19 +0000 (0:00:00.306) 0:00:31.719 ******* 2026-01-13 00:48:11.286965 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:48:11.286968 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:11.286972 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:48:11.286977 | orchestrator | 2026-01-13 00:48:11.286982 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-01-13 00:48:11.287006 | orchestrator | Tuesday 13 January 2026 00:44:20 +0000 (0:00:01.114) 0:00:32.833 ******* 2026-01-13 00:48:11.287012 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:48:11.287018 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:48:11.287023 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:11.287029 | orchestrator | 2026-01-13 
00:48:11.287032 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-01-13 00:48:11.287035 | orchestrator | Tuesday 13 January 2026 00:44:22 +0000 (0:00:01.917) 0:00:34.751 ******* 2026-01-13 00:48:11.287038 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:48:11.287041 | orchestrator | 2026-01-13 00:48:11.287044 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-01-13 00:48:11.287047 | orchestrator | Tuesday 13 January 2026 00:44:23 +0000 (0:00:00.717) 0:00:35.469 ******* 2026-01-13 00:48:11.287050 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:48:11.287054 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:48:11.287057 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:48:11.287060 | orchestrator | 2026-01-13 00:48:11.287063 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-01-13 00:48:11.287066 | orchestrator | Tuesday 13 January 2026 00:44:26 +0000 (0:00:02.947) 0:00:38.417 ******* 2026-01-13 00:48:11.287069 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.287072 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.287075 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:11.287078 | orchestrator | 2026-01-13 00:48:11.287081 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-01-13 00:48:11.287087 | orchestrator | Tuesday 13 January 2026 00:44:26 +0000 (0:00:00.538) 0:00:38.955 ******* 2026-01-13 00:48:11.287091 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.287096 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.287101 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:11.287107 | orchestrator | 2026-01-13 00:48:11.287112 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] 
************************** 2026-01-13 00:48:11.287117 | orchestrator | Tuesday 13 January 2026 00:44:28 +0000 (0:00:01.405) 0:00:40.361 ******* 2026-01-13 00:48:11.287122 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.287126 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.287130 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:11.287135 | orchestrator | 2026-01-13 00:48:11.287140 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-01-13 00:48:11.287151 | orchestrator | Tuesday 13 January 2026 00:44:30 +0000 (0:00:01.946) 0:00:42.308 ******* 2026-01-13 00:48:11.287157 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.287162 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.287167 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.287172 | orchestrator | 2026-01-13 00:48:11.287178 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-01-13 00:48:11.287181 | orchestrator | Tuesday 13 January 2026 00:44:30 +0000 (0:00:00.758) 0:00:43.066 ******* 2026-01-13 00:48:11.287184 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.287187 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.287190 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.287193 | orchestrator | 2026-01-13 00:48:11.287196 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-01-13 00:48:11.287199 | orchestrator | Tuesday 13 January 2026 00:44:31 +0000 (0:00:00.472) 0:00:43.539 ******* 2026-01-13 00:48:11.287202 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:11.287205 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:48:11.287208 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:48:11.287211 | orchestrator | 2026-01-13 00:48:11.287214 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label 
compatibility] ********** 2026-01-13 00:48:11.287220 | orchestrator | Tuesday 13 January 2026 00:44:33 +0000 (0:00:01.852) 0:00:45.391 ******* 2026-01-13 00:48:11.287223 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:48:11.287226 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:48:11.287229 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:48:11.287232 | orchestrator | 2026-01-13 00:48:11.287235 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-01-13 00:48:11.287238 | orchestrator | Tuesday 13 January 2026 00:44:36 +0000 (0:00:02.852) 0:00:48.244 ******* 2026-01-13 00:48:11.287241 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:48:11.287244 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:48:11.287247 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:48:11.287250 | orchestrator | 2026-01-13 00:48:11.287254 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-01-13 00:48:11.287257 | orchestrator | Tuesday 13 January 2026 00:44:36 +0000 (0:00:00.672) 0:00:48.916 ******* 2026-01-13 00:48:11.287260 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-13 00:48:11.287264 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-13 00:48:11.287267 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-13 00:48:11.287270 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-01-13 00:48:11.287273 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-13 00:48:11.287278 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-13 00:48:11.287281 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-13 00:48:11.287285 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-13 00:48:11.287288 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-13 00:48:11.287291 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-13 00:48:11.287294 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-13 00:48:11.287297 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-13 00:48:11.287300 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:48:11.287303 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:48:11.287306 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:48:11.287309 | orchestrator |
2026-01-13 00:48:11.287312 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-01-13 00:48:11.287315 | orchestrator | Tuesday 13 January 2026 00:45:20 +0000 (0:00:43.608) 0:01:32.525 *******
2026-01-13 00:48:11.287318 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:48:11.287321 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:48:11.287324 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:48:11.287327 | orchestrator |
2026-01-13 00:48:11.287330 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-01-13 00:48:11.287333 | orchestrator | Tuesday 13 January 2026 00:45:20 +0000 (0:00:00.357) 0:01:32.883 *******
2026-01-13 00:48:11.287336 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:48:11.287339 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:48:11.287342 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:48:11.287345 | orchestrator |
2026-01-13 00:48:11.287348 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-01-13 00:48:11.287351 | orchestrator | Tuesday 13 January 2026 00:45:21 +0000 (0:00:01.073) 0:01:33.957 *******
2026-01-13 00:48:11.287354 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:48:11.287357 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:48:11.287360 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:48:11.287363 | orchestrator |
2026-01-13 00:48:11.287368 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-01-13 00:48:11.287371 | orchestrator | Tuesday 13 January 2026 00:45:23 +0000 (0:00:01.326) 0:01:35.284 *******
2026-01-13 00:48:11.287374 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:48:11.287377 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:48:11.287380 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:48:11.287383 | orchestrator |
2026-01-13 00:48:11.287387 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-01-13 00:48:11.287390 | orchestrator | Tuesday 13 January 2026 00:45:48 +0000 (0:00:25.639) 0:02:00.923 *******
2026-01-13 00:48:11.287393 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:48:11.287396 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:48:11.287399 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:48:11.287402 | orchestrator |
2026-01-13 00:48:11.287405 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-01-13 00:48:11.287408 | orchestrator | Tuesday 13 January 2026 00:45:49 +0000 (0:00:00.595) 0:02:01.519 *******
2026-01-13 00:48:11.287411 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:48:11.287414 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:48:11.287417 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:48:11.287422 | orchestrator |
2026-01-13 00:48:11.287425 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-01-13 00:48:11.287430 | orchestrator | Tuesday 13 January 2026 00:45:50 +0000 (0:00:00.643) 0:02:02.162 *******
2026-01-13 00:48:11.287433 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:48:11.287436 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:48:11.287439 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:48:11.287442 | orchestrator |
2026-01-13 00:48:11.287445 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-01-13 00:48:11.287448 | orchestrator | Tuesday 13 January 2026 00:45:50 +0000 (0:00:00.651) 0:02:02.813 *******
2026-01-13 00:48:11.287451 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:48:11.287468 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:48:11.287474 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:48:11.287477 | orchestrator |
2026-01-13 00:48:11.287480 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-01-13 00:48:11.287483 | orchestrator | Tuesday 13 January 2026 00:45:51 +0000 (0:00:00.883) 0:02:03.697 *******
2026-01-13 00:48:11.287486 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:48:11.287489 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:48:11.287492 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:48:11.287495 | orchestrator |
2026-01-13 00:48:11.287498 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-01-13 00:48:11.287501 | orchestrator | Tuesday 13 January 2026 00:45:51 +0000 (0:00:00.363) 0:02:04.060 *******
2026-01-13 00:48:11.287504 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:48:11.287507 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:48:11.287510 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:48:11.287513 | orchestrator |
2026-01-13 00:48:11.287516 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-01-13 00:48:11.287519 | orchestrator | Tuesday 13 January 2026 00:45:52 +0000 (0:00:00.648) 0:02:04.708 *******
2026-01-13 00:48:11.287522 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:48:11.287525 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:48:11.287528 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:48:11.287531 | orchestrator |
2026-01-13 00:48:11.287534 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-01-13 00:48:11.287537 | orchestrator | Tuesday 13 January 2026 00:45:53 +0000 (0:00:00.624) 0:02:05.333 *******
2026-01-13 00:48:11.287540 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:48:11.287543 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:48:11.287546 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:48:11.287549 | orchestrator |
2026-01-13 00:48:11.287552 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-01-13 00:48:11.287555 | orchestrator | Tuesday 13 January 2026 00:45:54 +0000 (0:00:01.377) 0:02:06.711 *******
2026-01-13 00:48:11.287558 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:48:11.287561 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:48:11.287564 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:48:11.287567 | orchestrator |
2026-01-13 00:48:11.287570 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-01-13 00:48:11.287573 | orchestrator | Tuesday 13 January 2026 00:45:55 +0000 (0:00:00.864) 0:02:07.576 *******
2026-01-13 00:48:11.287576 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:48:11.287579 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:48:11.287582 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:48:11.287585 | orchestrator |
2026-01-13 00:48:11.287588 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-01-13 00:48:11.287591 | orchestrator | Tuesday 13 January 2026 00:45:55 +0000 (0:00:00.281) 0:02:07.857 *******
2026-01-13 00:48:11.287594 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:48:11.287597 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:48:11.287600 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:48:11.287603 | orchestrator |
2026-01-13 00:48:11.287606 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-01-13 00:48:11.287611 | orchestrator | Tuesday 13 January 2026 00:45:56 +0000 (0:00:00.296) 0:02:08.153 *******
2026-01-13 00:48:11.287614 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:48:11.287617 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:48:11.287620 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:48:11.287623 | orchestrator |
2026-01-13 00:48:11.287627 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-01-13 00:48:11.287630 | orchestrator | Tuesday 13 January 2026 00:45:56 +0000 (0:00:00.918) 0:02:09.072 *******
2026-01-13 00:48:11.287633 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:48:11.287636 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:48:11.287639 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:48:11.287642 | orchestrator |
2026-01-13 00:48:11.287645 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-01-13 00:48:11.287648 | orchestrator | Tuesday 13 January 2026 00:45:57 +0000 (0:00:00.707) 0:02:09.779 *******
2026-01-13 00:48:11.287651 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-13 00:48:11.287656 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-13 00:48:11.287659 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-13 00:48:11.287662 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-13 00:48:11.287665 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-13 00:48:11.287668 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-13 00:48:11.287672 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-13 00:48:11.287675 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-13 00:48:11.287678 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-13 00:48:11.287681 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-01-13 00:48:11.287688 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-13 00:48:11.287694 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-13 00:48:11.287698 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-01-13 00:48:11.287703 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-13 00:48:11.287708 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-13 00:48:11.287713 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-13 00:48:11.287718 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-13 00:48:11.287723 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-13 00:48:11.287728 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-13 00:48:11.287733 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-13 00:48:11.287738 | orchestrator |
2026-01-13 00:48:11.287744 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-01-13 00:48:11.287750 | orchestrator |
2026-01-13 00:48:11.287753 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-01-13 00:48:11.287756 | orchestrator | Tuesday 13 January 2026 00:46:00 +0000 (0:00:02.885) 0:02:12.665 *******
2026-01-13 00:48:11.287759 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:48:11.287770 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:48:11.287774 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:48:11.287778 | orchestrator |
2026-01-13 00:48:11.287785 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-01-13 00:48:11.287792 | orchestrator | Tuesday 13 January 2026 00:46:01 +0000 (0:00:00.547) 0:02:13.213 *******
2026-01-13 00:48:11.287797 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:48:11.287802 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:48:11.287807 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:48:11.287811 | orchestrator |
2026-01-13 00:48:11.287816 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-01-13 00:48:11.287821 | orchestrator | Tuesday 13 January 2026 00:46:01 +0000 (0:00:00.662) 0:02:13.875 *******
2026-01-13 00:48:11.287828 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:48:11.287834 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:48:11.287838 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:48:11.287843 | orchestrator |
2026-01-13 00:48:11.287848 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-01-13 00:48:11.287852 | orchestrator | Tuesday 13 January 2026 00:46:02 +0000 (0:00:00.355) 0:02:14.231 *******
2026-01-13 00:48:11.287858 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 00:48:11.287864 | orchestrator |
2026-01-13 00:48:11.287870 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-01-13 00:48:11.287876 | orchestrator | Tuesday 13 January 2026 00:46:02 +0000 (0:00:00.716) 0:02:14.947 *******
2026-01-13 00:48:11.287882 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:48:11.287889 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:48:11.287894 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:48:11.287899 | orchestrator |
2026-01-13 00:48:11.287904 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-01-13 00:48:11.287909 | orchestrator | Tuesday 13 January 2026 00:46:03 +0000 (0:00:00.328) 0:02:15.276 *******
2026-01-13 00:48:11.287914 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:48:11.287919 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:48:11.287924 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:48:11.287929 | orchestrator |
2026-01-13 00:48:11.287934 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-01-13 00:48:11.287941 | orchestrator | Tuesday 13 January 2026 00:46:03 +0000 (0:00:00.311) 0:02:15.587 *******
2026-01-13 00:48:11.287944 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:48:11.287947 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:48:11.287950 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:48:11.287953 | orchestrator |
2026-01-13 00:48:11.287956 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-01-13 00:48:11.287959 | orchestrator | Tuesday 13 January 2026 00:46:03 +0000 (0:00:00.318) 0:02:15.906 *******
2026-01-13 00:48:11.287962 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:48:11.287965 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:48:11.287968 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:48:11.287971 | orchestrator |
2026-01-13 00:48:11.287979 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-01-13 00:48:11.287982 | orchestrator | Tuesday 13 January 2026 00:46:04 +0000 (0:00:00.972) 0:02:16.878 *******
2026-01-13 00:48:11.287985 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:48:11.287988 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:48:11.287991 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:48:11.287994 | orchestrator |
2026-01-13 00:48:11.287997 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-01-13 00:48:11.288000 | orchestrator | Tuesday 13 January 2026 00:46:05 +0000 (0:00:01.154) 0:02:18.032 *******
2026-01-13 00:48:11.288003 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:48:11.288006 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:48:11.288009 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:48:11.288016 | orchestrator |
2026-01-13 00:48:11.288019 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-01-13 00:48:11.288022 | orchestrator | Tuesday 13 January 2026 00:46:07 +0000 (0:00:01.441) 0:02:19.474 *******
2026-01-13 00:48:11.288025 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:48:11.288028 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:48:11.288031 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:48:11.288035 | orchestrator |
2026-01-13 00:48:11.288043 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-13 00:48:11.288049 | orchestrator |
2026-01-13 00:48:11.288054 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-13 00:48:11.288059 | orchestrator | Tuesday 13 January 2026 00:46:19 +0000 (0:00:11.785) 0:02:31.260 *******
2026-01-13 00:48:11.288064 | orchestrator | ok: [testbed-manager]
2026-01-13 00:48:11.288069 | orchestrator |
2026-01-13 00:48:11.288074 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-13 00:48:11.288079 | orchestrator | Tuesday 13 January 2026 00:46:19 +0000 (0:00:00.772) 0:02:32.033 *******
2026-01-13 00:48:11.288084 | orchestrator | changed: [testbed-manager]
2026-01-13 00:48:11.288089 | orchestrator |
2026-01-13 00:48:11.288094 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-13 00:48:11.288100 | orchestrator | Tuesday 13 January 2026 00:46:20 +0000 (0:00:00.517) 0:02:32.550 *******
2026-01-13 00:48:11.288105 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-13 00:48:11.288111 | orchestrator |
2026-01-13 00:48:11.288126 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-13 00:48:11.288131 | orchestrator | Tuesday 13 January 2026 00:46:20 +0000 (0:00:00.526) 0:02:33.077 *******
2026-01-13 00:48:11.288140 | orchestrator | changed: [testbed-manager]
2026-01-13 00:48:11.288146 | orchestrator |
2026-01-13 00:48:11.288151 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-13 00:48:11.288155 | orchestrator | Tuesday 13 January 2026 00:46:21 +0000 (0:00:00.947) 0:02:34.024 *******
2026-01-13 00:48:11.288163 | orchestrator | changed: [testbed-manager]
2026-01-13 00:48:11.288168 | orchestrator |
2026-01-13 00:48:11.288172 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-13 00:48:11.288177 | orchestrator | Tuesday 13 January 2026 00:46:22 +0000 (0:00:00.868) 0:02:34.892 *******
2026-01-13 00:48:11.288182 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-13 00:48:11.288187 | orchestrator |
2026-01-13 00:48:11.288191 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-13 00:48:11.288197 | orchestrator | Tuesday 13 January 2026 00:46:24 +0000 (0:00:01.855) 0:02:36.748 *******
2026-01-13 00:48:11.288201 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-13 00:48:11.288206 | orchestrator |
2026-01-13 00:48:11.288210 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-13 00:48:11.288215 | orchestrator | Tuesday 13 January 2026 00:46:25 +0000 (0:00:00.357) 0:02:37.664 *******
2026-01-13 00:48:11.288220 | orchestrator | changed: [testbed-manager]
2026-01-13 00:48:11.288225 | orchestrator |
2026-01-13 00:48:11.288230 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-13 00:48:11.288234 | orchestrator | Tuesday 13 January 2026 00:46:25 +0000 (0:00:00.357) 0:02:38.022 *******
2026-01-13 00:48:11.288239 | orchestrator | changed: [testbed-manager]
2026-01-13 00:48:11.288244 | orchestrator |
2026-01-13 00:48:11.288251 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-01-13 00:48:11.288257 | orchestrator |
2026-01-13 00:48:11.288262 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-01-13 00:48:11.288267 | orchestrator | Tuesday 13 January 2026 00:46:26 +0000 (0:00:00.574) 0:02:38.596 *******
2026-01-13 00:48:11.288272 | orchestrator | ok: [testbed-manager]
2026-01-13 00:48:11.288277 | orchestrator |
2026-01-13 00:48:11.288281 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-01-13 00:48:11.288286 | orchestrator | Tuesday 13 January 2026 00:46:26 +0000 (0:00:00.105) 0:02:38.702 *******
2026-01-13 00:48:11.288295 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-01-13 00:48:11.288301 | orchestrator |
2026-01-13 00:48:11.288306 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-01-13 00:48:11.288311 | orchestrator | Tuesday 13 January 2026 00:46:26 +0000 (0:00:00.178) 0:02:38.880 *******
2026-01-13 00:48:11.288316 | orchestrator | ok: [testbed-manager]
2026-01-13 00:48:11.288321 | orchestrator |
2026-01-13 00:48:11.288326 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-01-13 00:48:11.288331 | orchestrator | Tuesday 13 January 2026 00:46:27 +0000 (0:00:00.759) 0:02:39.639 *******
2026-01-13 00:48:11.288335 | orchestrator | ok: [testbed-manager]
2026-01-13 00:48:11.288338 | orchestrator |
2026-01-13 00:48:11.288341 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-01-13 00:48:11.288344 | orchestrator | Tuesday 13 January 2026 00:46:28 +0000 (0:00:01.292) 0:02:40.932 *******
2026-01-13 00:48:11.288347 | orchestrator | changed: [testbed-manager]
2026-01-13 00:48:11.288350 | orchestrator |
2026-01-13 00:48:11.288353 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-01-13 00:48:11.288356 | orchestrator | Tuesday 13 January 2026 00:46:29 +0000 (0:00:00.738) 0:02:41.670 *******
2026-01-13 00:48:11.288359 | orchestrator | ok: [testbed-manager]
2026-01-13 00:48:11.288362 | orchestrator |
2026-01-13 00:48:11.288369 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-01-13 00:48:11.288373 | orchestrator | Tuesday 13 January 2026 00:46:29 +0000 (0:00:00.364) 0:02:42.035 *******
2026-01-13 00:48:11.288376 | orchestrator | changed: [testbed-manager]
2026-01-13 00:48:11.288381 | orchestrator |
2026-01-13 00:48:11.288388 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-01-13 00:48:11.288394 | orchestrator | Tuesday 13 January 2026 00:46:36 +0000 (0:00:06.864) 0:02:48.899 *******
2026-01-13 00:48:11.288399 | orchestrator | changed: [testbed-manager]
2026-01-13 00:48:11.288404 | orchestrator |
2026-01-13 00:48:11.288409 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-01-13 00:48:11.288413 | orchestrator | Tuesday 13 January 2026 00:46:48 +0000 (0:00:11.729) 0:03:00.628 *******
2026-01-13 00:48:11.288420 | orchestrator | ok: [testbed-manager]
2026-01-13 00:48:11.288425 | orchestrator |
2026-01-13 00:48:11.288430 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-13 00:48:11.288434 | orchestrator |
2026-01-13 00:48:11.288439 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-13 00:48:11.288444 | orchestrator | Tuesday 13 January 2026 00:46:49 +0000 (0:00:00.458) 0:03:01.087 *******
2026-01-13 00:48:11.288449 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:48:11.288467 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:48:11.288472 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:48:11.288478 | orchestrator |
2026-01-13 00:48:11.288484 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-13 00:48:11.288489 | orchestrator | Tuesday 13 January 2026 00:46:49 +0000 (0:00:00.298) 0:03:01.386 *******
2026-01-13 00:48:11.288495 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:48:11.288499 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:48:11.288503 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:48:11.288508 | orchestrator |
2026-01-13 00:48:11.288513 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-13 00:48:11.288518 | orchestrator | Tuesday 13 January 2026 00:46:49 +0000 (0:00:00.250) 0:03:01.636 *******
2026-01-13 00:48:11.288523 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:48:11.288527 | orchestrator |
2026-01-13 00:48:11.288532 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-01-13 00:48:11.288536 | orchestrator | Tuesday 13 January 2026 00:46:50 +0000 (0:00:00.607) 0:03:02.243 *******
2026-01-13 00:48:11.288545 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-13 00:48:11.288550 | orchestrator |
2026-01-13 00:48:11.288555 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-01-13 00:48:11.288560 | orchestrator | Tuesday 13 January 2026 00:46:51 +0000 (0:00:00.997) 0:03:03.241 *******
2026-01-13 00:48:11.288565 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-13 00:48:11.288570 | orchestrator |
2026-01-13 00:48:11.288575 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-01-13 00:48:11.288580 | orchestrator | Tuesday 13 January 2026 00:46:51 +0000 (0:00:00.782) 0:03:04.023 *******
2026-01-13 00:48:11.288585 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:48:11.288590 | orchestrator |
2026-01-13 00:48:11.288596 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-01-13 00:48:11.288601 | orchestrator | Tuesday 13 January 2026 00:46:52 +0000 (0:00:00.101) 0:03:04.125 *******
2026-01-13 00:48:11.288606 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-13 00:48:11.288611 | orchestrator |
2026-01-13 00:48:11.288616 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-01-13 00:48:11.288621 | orchestrator | Tuesday 13 January 2026 00:46:53 +0000 (0:00:00.985) 0:03:05.110 *******
2026-01-13 00:48:11.288626 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:48:11.288631 | orchestrator |
2026-01-13 00:48:11.288636 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-01-13 00:48:11.288641 | orchestrator | Tuesday 13 January 2026 00:46:53 +0000 (0:00:00.118) 0:03:05.228 *******
2026-01-13 00:48:11.288646 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:48:11.288650 | orchestrator |
2026-01-13 00:48:11.288655 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-01-13 00:48:11.288660 | orchestrator | Tuesday 13 January 2026 00:46:53 +0000 (0:00:00.116) 0:03:05.345 *******
2026-01-13 00:48:11.288664 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:48:11.288670 | orchestrator |
2026-01-13 00:48:11.288674 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-01-13 00:48:11.288679 | orchestrator | Tuesday 13 January 2026 00:46:53 +0000 (0:00:00.113) 0:03:05.459 *******
2026-01-13 00:48:11.288685 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:48:11.288690 | orchestrator |
2026-01-13 00:48:11.288694 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-01-13 00:48:11.288699 | orchestrator | Tuesday 13 January 2026 00:46:53 +0000 (0:00:00.129) 0:03:05.589 *******
2026-01-13 00:48:11.288704 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-13 00:48:11.288709 | orchestrator |
2026-01-13 00:48:11.288714 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-01-13 00:48:11.288719 | orchestrator | Tuesday 13 January 2026 00:46:58 +0000 (0:00:05.335) 0:03:10.924 *******
2026-01-13 00:48:11.288723 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-01-13 00:48:11.288728 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-01-13 00:48:11.288733 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-01-13 00:48:11.288737 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-01-13 00:48:11.288742 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-01-13 00:48:11.288747 | orchestrator |
2026-01-13 00:48:11.288751 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-01-13 00:48:11.288756 | orchestrator | Tuesday 13 January 2026 00:47:41 +0000 (0:00:42.203) 0:03:53.128 *******
2026-01-13 00:48:11.288899 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-13 00:48:11.288910 | orchestrator |
2026-01-13 00:48:11.288915 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-01-13 00:48:11.288921 | orchestrator | Tuesday 13 January 2026 00:47:42 +0000 (0:00:01.205) 0:03:54.333 *******
2026-01-13 00:48:11.288926 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-13 00:48:11.288937 | orchestrator |
2026-01-13 00:48:11.288942 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-01-13 00:48:11.288946 | orchestrator | Tuesday 13 January 2026 00:47:43 +0000 (0:00:01.557) 0:03:55.891 *******
2026-01-13 00:48:11.288951 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-13 00:48:11.288956 | orchestrator |
2026-01-13 00:48:11.288961 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-01-13 00:48:11.288966 | orchestrator | Tuesday 13 January 2026 00:47:45 +0000 (0:00:01.209) 0:03:57.100 *******
2026-01-13 00:48:11.288971 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:48:11.288974 | orchestrator |
2026-01-13 00:48:11.288977 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-01-13 00:48:11.288980 | orchestrator | Tuesday 13 January 2026 00:47:45 +0000 (0:00:00.101) 0:03:57.202 *******
2026-01-13 00:48:11.289425 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-01-13 00:48:11.289448 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-01-13 00:48:11.289481 | orchestrator |
2026-01-13 00:48:11.289489 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-01-13 00:48:11.289497 | orchestrator | Tuesday 13 January 2026 00:47:46 +0000 (0:00:01.545) 0:03:58.747 *******
2026-01-13 00:48:11.289502 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:48:11.289508 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:48:11.289512 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:48:11.289517 | orchestrator |
2026-01-13 00:48:11.289522 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-01-13 00:48:11.289527 | orchestrator | Tuesday 13 January 2026 00:47:47 +0000 (0:00:00.339) 0:03:59.086 *******
2026-01-13 00:48:11.289532 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:48:11.289538 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:48:11.289543 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:48:11.289549 | orchestrator |
2026-01-13 00:48:11.289554 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-01-13 00:48:11.289559 | orchestrator |
2026-01-13 00:48:11.289564 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-01-13 00:48:11.289568 | orchestrator | Tuesday 13 January 2026 00:47:47 +0000 (0:00:00.952) 0:04:00.039 *******
2026-01-13 00:48:11.289571 | orchestrator | ok: [testbed-manager]
2026-01-13 00:48:11.289574 | orchestrator |
2026-01-13 00:48:11.289578 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-01-13 00:48:11.289582 | orchestrator | Tuesday 13 January 2026 00:47:48 +0000 (0:00:00.131) 0:04:00.171 *******
2026-01-13 00:48:11.289585 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-01-13 00:48:11.289589 | orchestrator |
2026-01-13 00:48:11.289592 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-01-13 00:48:11.289596 | orchestrator | Tuesday 13 January 2026 00:47:48 +0000 (0:00:00.195) 0:04:00.366 *******
2026-01-13 00:48:11.289599 | orchestrator | changed: [testbed-manager]
2026-01-13 00:48:11.289602 | orchestrator |
2026-01-13 00:48:11.289606 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-01-13 00:48:11.289609 | orchestrator |
2026-01-13 00:48:11.289613 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-01-13 00:48:11.289616 | orchestrator | Tuesday 13 January 2026 00:47:54 +0000 (0:00:06.380) 0:04:06.747 *******
2026-01-13 00:48:11.289620 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:48:11.289623 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:48:11.289627 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:48:11.289630 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:48:11.289634 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:48:11.289637 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:48:11.289641 | orchestrator |
2026-01-13 00:48:11.289644 | orchestrator | TASK [Manage labels] ***********************************************************
2026-01-13 00:48:11.289653 | orchestrator | Tuesday 13 January 2026 00:47:55 +0000 (0:00:01.215) 0:04:07.962 *******
2026-01-13 00:48:11.289656 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-13 00:48:11.289660 | orchestrator | ok: [testbed-node-5 -> localhost] =>
(item=node-role.osism.tech/compute-plane=true) 2026-01-13 00:48:11.289663 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-13 00:48:11.289667 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-13 00:48:11.289670 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-13 00:48:11.289673 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-13 00:48:11.289677 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-13 00:48:11.289680 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-13 00:48:11.289684 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-13 00:48:11.289687 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-13 00:48:11.289691 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-13 00:48:11.289694 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-13 00:48:11.289704 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-13 00:48:11.289707 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-13 00:48:11.289711 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-13 00:48:11.289714 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-13 00:48:11.289718 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-13 00:48:11.289722 | orchestrator | ok: [testbed-node-0 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-01-13 00:48:11.289727 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-13 00:48:11.289732 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-13 00:48:11.289737 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-13 00:48:11.289742 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-13 00:48:11.289747 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-13 00:48:11.289751 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-13 00:48:11.289756 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-13 00:48:11.289764 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-13 00:48:11.289768 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-13 00:48:11.289773 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-13 00:48:11.289778 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-13 00:48:11.289782 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-13 00:48:11.289787 | orchestrator | 2026-01-13 00:48:11.289792 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-13 00:48:11.289797 | orchestrator | Tuesday 13 January 2026 00:48:07 +0000 (0:00:11.720) 0:04:19.682 ******* 2026-01-13 00:48:11.289801 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:48:11.289806 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:48:11.289814 | orchestrator | 
skipping: [testbed-node-5] 2026-01-13 00:48:11.289819 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.289824 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.289829 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.289834 | orchestrator | 2026-01-13 00:48:11.289840 | orchestrator | TASK [Manage taints] *********************************************************** 2026-01-13 00:48:11.289846 | orchestrator | Tuesday 13 January 2026 00:48:08 +0000 (0:00:00.569) 0:04:20.252 ******* 2026-01-13 00:48:11.289851 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:48:11.289856 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:48:11.289861 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:48:11.289865 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:11.289869 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:11.289872 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:11.289876 | orchestrator | 2026-01-13 00:48:11.289879 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:48:11.289883 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:48:11.289888 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-13 00:48:11.289891 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-13 00:48:11.289894 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-13 00:48:11.289897 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-13 00:48:11.289900 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-13 00:48:11.289903 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-13 00:48:11.289906 | orchestrator | 2026-01-13 00:48:11.289909 | orchestrator | 2026-01-13 00:48:11.289913 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:48:11.289916 | orchestrator | Tuesday 13 January 2026 00:48:08 +0000 (0:00:00.410) 0:04:20.662 ******* 2026-01-13 00:48:11.289919 | orchestrator | =============================================================================== 2026-01-13 00:48:11.289922 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.61s 2026-01-13 00:48:11.289926 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.20s 2026-01-13 00:48:11.289932 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.64s 2026-01-13 00:48:11.289938 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.79s 2026-01-13 00:48:11.289941 | orchestrator | kubectl : Install required packages ------------------------------------ 11.73s 2026-01-13 00:48:11.289944 | orchestrator | Manage labels ---------------------------------------------------------- 11.72s 2026-01-13 00:48:11.289947 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.86s 2026-01-13 00:48:11.289950 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.38s 2026-01-13 00:48:11.289953 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.99s 2026-01-13 00:48:11.289956 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.34s 2026-01-13 00:48:11.289959 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.95s 2026-01-13 00:48:11.289962 | orchestrator | k3s_server : Remove manifests and folders that are 
only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.89s 2026-01-13 00:48:11.289968 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.85s 2026-01-13 00:48:11.289971 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.68s 2026-01-13 00:48:11.289974 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.16s 2026-01-13 00:48:11.289977 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.96s 2026-01-13 00:48:11.289980 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.95s 2026-01-13 00:48:11.289985 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.93s 2026-01-13 00:48:11.289989 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.92s 2026-01-13 00:48:11.289992 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.86s 2026-01-13 00:48:11.289995 | orchestrator | 2026-01-13 00:48:11 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:48:11.289998 | orchestrator | 2026-01-13 00:48:11 | INFO  | Task 33a1d39c-6cf7-41c4-b1e3-f05ff55a3212 is in state STARTED 2026-01-13 00:48:11.290289 | orchestrator | 2026-01-13 00:48:11 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:48:11.294238 | orchestrator | 2026-01-13 00:48:11 | INFO  | Task 15cdb02a-9808-43f4-bc82-148fef832dc1 is in state STARTED 2026-01-13 00:48:11.294287 | orchestrator | 2026-01-13 00:48:11 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:48:14.332973 | orchestrator | 2026-01-13 00:48:14 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:48:14.333025 | orchestrator | 2026-01-13 00:48:14 | INFO  | Task 
b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:48:14.355707 | orchestrator | 2026-01-13 00:48:14 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:48:14.355753 | orchestrator | 2026-01-13 00:48:14 | INFO  | Task 33a1d39c-6cf7-41c4-b1e3-f05ff55a3212 is in state STARTED 2026-01-13 00:48:14.355761 | orchestrator | 2026-01-13 00:48:14 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:48:14.355767 | orchestrator | 2026-01-13 00:48:14 | INFO  | Task 15cdb02a-9808-43f4-bc82-148fef832dc1 is in state STARTED 2026-01-13 00:48:14.355773 | orchestrator | 2026-01-13 00:48:14 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:48:17.368444 | orchestrator | 2026-01-13 00:48:17 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:48:17.368613 | orchestrator | 2026-01-13 00:48:17 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:48:17.368622 | orchestrator | 2026-01-13 00:48:17 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:48:17.368630 | orchestrator | 2026-01-13 00:48:17 | INFO  | Task 33a1d39c-6cf7-41c4-b1e3-f05ff55a3212 is in state STARTED 2026-01-13 00:48:17.370064 | orchestrator | 2026-01-13 00:48:17 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:48:17.370122 | orchestrator | 2026-01-13 00:48:17 | INFO  | Task 15cdb02a-9808-43f4-bc82-148fef832dc1 is in state SUCCESS 2026-01-13 00:48:17.370132 | orchestrator | 2026-01-13 00:48:17 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:48:20.402327 | orchestrator | 2026-01-13 00:48:20 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:48:20.404784 | orchestrator | 2026-01-13 00:48:20 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:48:20.406779 | orchestrator | 2026-01-13 00:48:20 | INFO  | Task 
3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:48:20.407029 | orchestrator | 2026-01-13 00:48:20 | INFO  | Task 33a1d39c-6cf7-41c4-b1e3-f05ff55a3212 is in state SUCCESS 2026-01-13 00:48:20.408732 | orchestrator | 2026-01-13 00:48:20 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:48:20.408789 | orchestrator | 2026-01-13 00:48:20 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:48:23.448961 | orchestrator | 2026-01-13 00:48:23 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:48:23.450339 | orchestrator | 2026-01-13 00:48:23 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:48:23.452180 | orchestrator | 2026-01-13 00:48:23 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:48:23.453100 | orchestrator | 2026-01-13 00:48:23 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:48:23.453130 | orchestrator | 2026-01-13 00:48:23 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:48:26.485294 | orchestrator | 2026-01-13 00:48:26 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:48:26.488261 | orchestrator | 2026-01-13 00:48:26 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:48:26.490283 | orchestrator | 2026-01-13 00:48:26 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:48:26.492746 | orchestrator | 2026-01-13 00:48:26 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:48:26.492830 | orchestrator | 2026-01-13 00:48:26 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:48:29.538246 | orchestrator | 2026-01-13 00:48:29 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:48:29.538306 | orchestrator | 2026-01-13 00:48:29 | INFO  | Task 
b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:48:29.539115 | orchestrator | 2026-01-13 00:48:29 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:48:29.540207 | orchestrator | 2026-01-13 00:48:29 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:48:29.540233 | orchestrator | 2026-01-13 00:48:29 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:48:32.577942 | orchestrator | 2026-01-13 00:48:32 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:48:32.578145 | orchestrator | 2026-01-13 00:48:32 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:48:32.578166 | orchestrator | 2026-01-13 00:48:32 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:48:32.578183 | orchestrator | 2026-01-13 00:48:32 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:48:32.578190 | orchestrator | 2026-01-13 00:48:32 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:48:35.619422 | orchestrator | 2026-01-13 00:48:35 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:48:35.619637 | orchestrator | 2026-01-13 00:48:35 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:48:35.620969 | orchestrator | 2026-01-13 00:48:35 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:48:35.622691 | orchestrator | 2026-01-13 00:48:35 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:48:35.622748 | orchestrator | 2026-01-13 00:48:35 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:48:38.654973 | orchestrator | 2026-01-13 00:48:38 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state STARTED 2026-01-13 00:48:38.656116 | orchestrator | 2026-01-13 00:48:38 | INFO  | Task 
b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:48:38.656896 | orchestrator | 2026-01-13 00:48:38 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:48:38.660209 | orchestrator | 2026-01-13 00:48:38 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:48:38.660246 | orchestrator | 2026-01-13 00:48:38 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:48:41.713178 | orchestrator | 2026-01-13 00:48:41 | INFO  | Task e398c6f6-1f7a-47da-aad9-f020dc0c55f2 is in state SUCCESS 2026-01-13 00:48:41.714618 | orchestrator | 2026-01-13 00:48:41.714669 | orchestrator | 2026-01-13 00:48:41.714680 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-13 00:48:41.714689 | orchestrator | 2026-01-13 00:48:41.714698 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-13 00:48:41.714707 | orchestrator | Tuesday 13 January 2026 00:48:13 +0000 (0:00:00.176) 0:00:00.176 ******* 2026-01-13 00:48:41.714715 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-13 00:48:41.714723 | orchestrator | 2026-01-13 00:48:41.714732 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-13 00:48:41.714740 | orchestrator | Tuesday 13 January 2026 00:48:14 +0000 (0:00:00.718) 0:00:00.895 ******* 2026-01-13 00:48:41.714745 | orchestrator | changed: [testbed-manager] 2026-01-13 00:48:41.714751 | orchestrator | 2026-01-13 00:48:41.714756 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-13 00:48:41.714761 | orchestrator | Tuesday 13 January 2026 00:48:15 +0000 (0:00:00.994) 0:00:01.890 ******* 2026-01-13 00:48:41.714765 | orchestrator | changed: [testbed-manager] 2026-01-13 00:48:41.714770 | orchestrator | 2026-01-13 00:48:41.714775 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-13 00:48:41.714780 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:48:41.714785 | orchestrator | 2026-01-13 00:48:41.714790 | orchestrator | 2026-01-13 00:48:41.714795 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:48:41.714800 | orchestrator | Tuesday 13 January 2026 00:48:15 +0000 (0:00:00.413) 0:00:02.303 ******* 2026-01-13 00:48:41.714805 | orchestrator | =============================================================================== 2026-01-13 00:48:41.714809 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.99s 2026-01-13 00:48:41.714814 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s 2026-01-13 00:48:41.714819 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.41s 2026-01-13 00:48:41.714823 | orchestrator | 2026-01-13 00:48:41.714828 | orchestrator | 2026-01-13 00:48:41.714841 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-13 00:48:41.714846 | orchestrator | 2026-01-13 00:48:41.714851 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-13 00:48:41.714856 | orchestrator | Tuesday 13 January 2026 00:48:13 +0000 (0:00:00.151) 0:00:00.151 ******* 2026-01-13 00:48:41.714861 | orchestrator | ok: [testbed-manager] 2026-01-13 00:48:41.714866 | orchestrator | 2026-01-13 00:48:41.714871 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-13 00:48:41.714875 | orchestrator | Tuesday 13 January 2026 00:48:13 +0000 (0:00:00.474) 0:00:00.625 ******* 2026-01-13 00:48:41.714880 | orchestrator | ok: [testbed-manager] 2026-01-13 00:48:41.714885 | orchestrator | 2026-01-13 
00:48:41.714889 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-13 00:48:41.714894 | orchestrator | Tuesday 13 January 2026 00:48:14 +0000 (0:00:00.776) 0:00:01.402 ******* 2026-01-13 00:48:41.714911 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-13 00:48:41.714917 | orchestrator | 2026-01-13 00:48:41.714922 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-13 00:48:41.714927 | orchestrator | Tuesday 13 January 2026 00:48:15 +0000 (0:00:00.690) 0:00:02.093 ******* 2026-01-13 00:48:41.714932 | orchestrator | changed: [testbed-manager] 2026-01-13 00:48:41.714936 | orchestrator | 2026-01-13 00:48:41.714941 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-13 00:48:41.714946 | orchestrator | Tuesday 13 January 2026 00:48:16 +0000 (0:00:01.165) 0:00:03.258 ******* 2026-01-13 00:48:41.714951 | orchestrator | changed: [testbed-manager] 2026-01-13 00:48:41.714955 | orchestrator | 2026-01-13 00:48:41.714960 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-13 00:48:41.714965 | orchestrator | Tuesday 13 January 2026 00:48:17 +0000 (0:00:00.477) 0:00:03.735 ******* 2026-01-13 00:48:41.714970 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-13 00:48:41.714975 | orchestrator | 2026-01-13 00:48:41.714979 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-13 00:48:41.714984 | orchestrator | Tuesday 13 January 2026 00:48:18 +0000 (0:00:01.412) 0:00:05.148 ******* 2026-01-13 00:48:41.714989 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-13 00:48:41.714993 | orchestrator | 2026-01-13 00:48:41.714998 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-13 00:48:41.715003 | orchestrator | 
Tuesday 13 January 2026 00:48:19 +0000 (0:00:00.756) 0:00:05.905 ******* 2026-01-13 00:48:41.715008 | orchestrator | ok: [testbed-manager] 2026-01-13 00:48:41.715014 | orchestrator | 2026-01-13 00:48:41.715022 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-13 00:48:41.715035 | orchestrator | Tuesday 13 January 2026 00:48:19 +0000 (0:00:00.414) 0:00:06.319 ******* 2026-01-13 00:48:41.715043 | orchestrator | ok: [testbed-manager] 2026-01-13 00:48:41.715050 | orchestrator | 2026-01-13 00:48:41.715058 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:48:41.715066 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:48:41.715073 | orchestrator | 2026-01-13 00:48:41.715081 | orchestrator | 2026-01-13 00:48:41.715089 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:48:41.715097 | orchestrator | Tuesday 13 January 2026 00:48:19 +0000 (0:00:00.221) 0:00:06.541 ******* 2026-01-13 00:48:41.715104 | orchestrator | =============================================================================== 2026-01-13 00:48:41.715114 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.41s 2026-01-13 00:48:41.715122 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.17s 2026-01-13 00:48:41.715130 | orchestrator | Create .kube directory -------------------------------------------------- 0.78s 2026-01-13 00:48:41.715149 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.76s 2026-01-13 00:48:41.715155 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.69s 2026-01-13 00:48:41.715159 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.48s 
2026-01-13 00:48:41.715164 | orchestrator | Get home directory of operator user ------------------------------------- 0.47s 2026-01-13 00:48:41.715169 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.41s 2026-01-13 00:48:41.715174 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.22s 2026-01-13 00:48:41.715178 | orchestrator | 2026-01-13 00:48:41.715183 | orchestrator | 2026-01-13 00:48:41.715188 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-01-13 00:48:41.715193 | orchestrator | 2026-01-13 00:48:41.715197 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-13 00:48:41.715202 | orchestrator | Tuesday 13 January 2026 00:46:30 +0000 (0:00:00.188) 0:00:00.188 ******* 2026-01-13 00:48:41.715214 | orchestrator | ok: [localhost] => { 2026-01-13 00:48:41.715220 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-01-13 00:48:41.715226 | orchestrator | } 2026-01-13 00:48:41.715232 | orchestrator | 2026-01-13 00:48:41.715237 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-13 00:48:41.715242 | orchestrator | Tuesday 13 January 2026 00:46:30 +0000 (0:00:00.072) 0:00:00.261 ******* 2026-01-13 00:48:41.715248 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-13 00:48:41.715255 | orchestrator | ...ignoring 2026-01-13 00:48:41.715341 | orchestrator | 2026-01-13 00:48:41.715350 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-13 00:48:41.715356 | orchestrator | Tuesday 13 January 2026 00:46:33 +0000 (0:00:02.875) 0:00:03.136 ******* 2026-01-13 00:48:41.715361 | orchestrator | skipping: [localhost] 2026-01-13 00:48:41.715366 | orchestrator | 2026-01-13 00:48:41.715376 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-13 00:48:41.715381 | orchestrator | Tuesday 13 January 2026 00:46:33 +0000 (0:00:00.050) 0:00:03.186 ******* 2026-01-13 00:48:41.715386 | orchestrator | ok: [localhost] 2026-01-13 00:48:41.715392 | orchestrator | 2026-01-13 00:48:41.715397 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 00:48:41.715402 | orchestrator | 2026-01-13 00:48:41.715408 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 00:48:41.715413 | orchestrator | Tuesday 13 January 2026 00:46:33 +0000 (0:00:00.164) 0:00:03.351 ******* 2026-01-13 00:48:41.715418 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:48:41.715424 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:48:41.715447 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:48:41.715453 | orchestrator | 2026-01-13 00:48:41.715459 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 00:48:41.715467 | orchestrator | Tuesday 13 January 2026 00:46:33 +0000 (0:00:00.312) 0:00:03.664 ******* 2026-01-13 00:48:41.715475 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-13 00:48:41.715483 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-01-13 00:48:41.715492 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-13 00:48:41.715499 | orchestrator | 2026-01-13 00:48:41.715507 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-13 00:48:41.715516 | orchestrator | 2026-01-13 00:48:41.715525 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-13 00:48:41.715534 | orchestrator | Tuesday 13 January 2026 00:46:34 +0000 (0:00:00.518) 0:00:04.183 ******* 2026-01-13 00:48:41.715542 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:48:41.715548 | orchestrator | 2026-01-13 00:48:41.715553 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-13 00:48:41.715559 | orchestrator | Tuesday 13 January 2026 00:46:34 +0000 (0:00:00.562) 0:00:04.746 ******* 2026-01-13 00:48:41.715565 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:48:41.715573 | orchestrator | 2026-01-13 00:48:41.715581 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-13 00:48:41.715590 | orchestrator | Tuesday 13 January 2026 00:46:35 +0000 (0:00:00.989) 0:00:05.736 ******* 2026-01-13 00:48:41.715597 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:41.715605 | orchestrator | 2026-01-13 00:48:41.715613 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-13 00:48:41.715621 | orchestrator | Tuesday 13 January 2026 00:46:35 +0000 (0:00:00.387) 0:00:06.123 ******* 2026-01-13 00:48:41.715629 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:41.715637 | orchestrator | 2026-01-13 00:48:41.715645 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-13 00:48:41.715661 | 
orchestrator | Tuesday 13 January 2026 00:46:36 +0000 (0:00:00.555) 0:00:06.679 ******* 2026-01-13 00:48:41.715670 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:41.715679 | orchestrator | 2026-01-13 00:48:41.715687 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-13 00:48:41.715696 | orchestrator | Tuesday 13 January 2026 00:46:37 +0000 (0:00:00.492) 0:00:07.172 ******* 2026-01-13 00:48:41.715705 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:41.715713 | orchestrator | 2026-01-13 00:48:41.715722 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-13 00:48:41.715731 | orchestrator | Tuesday 13 January 2026 00:46:37 +0000 (0:00:00.771) 0:00:07.943 ******* 2026-01-13 00:48:41.715740 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:48:41.715749 | orchestrator | 2026-01-13 00:48:41.715757 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-13 00:48:41.715775 | orchestrator | Tuesday 13 January 2026 00:46:38 +0000 (0:00:00.760) 0:00:08.704 ******* 2026-01-13 00:48:41.715784 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:48:41.715793 | orchestrator | 2026-01-13 00:48:41.715802 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-13 00:48:41.715811 | orchestrator | Tuesday 13 January 2026 00:46:39 +0000 (0:00:01.146) 0:00:09.850 ******* 2026-01-13 00:48:41.715819 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:41.715828 | orchestrator | 2026-01-13 00:48:41.715837 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-13 00:48:41.715846 | orchestrator | Tuesday 13 January 2026 00:46:40 +0000 (0:00:00.336) 0:00:10.186 ******* 2026-01-13 00:48:41.715854 | orchestrator | 
skipping: [testbed-node-0] 2026-01-13 00:48:41.715863 | orchestrator | 2026-01-13 00:48:41.715871 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-13 00:48:41.715878 | orchestrator | Tuesday 13 January 2026 00:46:40 +0000 (0:00:00.746) 0:00:10.933 ******* 2026-01-13 00:48:41.715896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-13 00:48:41.715909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-13 00:48:41.715926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-13 00:48:41.715935 | orchestrator | 2026-01-13 00:48:41.715944 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-13 00:48:41.715953 | orchestrator | Tuesday 13 January 2026 00:46:41 +0000 (0:00:00.958) 0:00:11.891 ******* 2026-01-13 00:48:41.715969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-13 00:48:41.715982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-13 00:48:41.715992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-13 00:48:41.716015 | orchestrator | 2026-01-13 00:48:41.716024 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-13 00:48:41.716033 | orchestrator | Tuesday 13 January 2026 00:46:43 +0000 (0:00:01.638) 0:00:13.529 ******* 2026-01-13 00:48:41.716042 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-13 00:48:41.716051 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-13 00:48:41.716060 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-13 00:48:41.716069 | orchestrator | 2026-01-13 00:48:41.716078 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-01-13 00:48:41.716087 | orchestrator | Tuesday 13 January 2026 00:46:45 +0000 (0:00:01.896) 0:00:15.426 ******* 2026-01-13 00:48:41.716096 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-13 00:48:41.716104 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-13 00:48:41.716113 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-13 00:48:41.716122 | orchestrator | 2026-01-13 00:48:41.716131 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-13 00:48:41.716143 | orchestrator | Tuesday 13 January 2026 00:46:47 +0000 (0:00:02.135) 0:00:17.562 ******* 2026-01-13 00:48:41.716152 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-13 00:48:41.716159 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-13 00:48:41.716166 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-13 00:48:41.716174 | orchestrator | 2026-01-13 00:48:41.716181 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-13 00:48:41.716189 | orchestrator | Tuesday 13 January 2026 00:46:49 +0000 (0:00:02.099) 0:00:19.662 ******* 2026-01-13 00:48:41.716197 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-13 00:48:41.716204 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-13 00:48:41.716211 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-13 00:48:41.716219 | orchestrator | 2026-01-13 00:48:41.716226 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-01-13 00:48:41.716232 | orchestrator | Tuesday 13 January 2026 00:46:51 +0000 (0:00:02.098) 0:00:21.760 ******* 2026-01-13 00:48:41.716239 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-13 00:48:41.716246 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-13 00:48:41.716254 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-13 00:48:41.716261 | orchestrator | 2026-01-13 00:48:41.716269 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-13 00:48:41.716276 | orchestrator | Tuesday 13 January 2026 00:46:53 +0000 (0:00:01.632) 0:00:23.393 ******* 2026-01-13 00:48:41.716289 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-13 00:48:41.716301 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-13 00:48:41.716311 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-13 00:48:41.716319 | orchestrator | 2026-01-13 00:48:41.716328 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-13 00:48:41.716336 | orchestrator | Tuesday 13 January 2026 00:46:54 +0000 (0:00:01.696) 0:00:25.090 ******* 2026-01-13 00:48:41.716344 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:41.716352 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:41.716360 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:41.716369 | orchestrator | 2026-01-13 00:48:41.716378 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-01-13 00:48:41.716387 | orchestrator | Tuesday 13 January 2026 00:46:55 
+0000 (0:00:00.504) 0:00:25.594 ******* 2026-01-13 00:48:41.716507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-13 00:48:41.716528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-13 00:48:41.716539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-13 00:48:41.716557 | orchestrator | 2026-01-13 00:48:41.716567 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-01-13 00:48:41.716576 | orchestrator | Tuesday 13 January 2026 00:46:57 +0000 (0:00:02.301) 0:00:27.896 ******* 2026-01-13 00:48:41.716585 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:41.716594 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:48:41.716603 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:48:41.716610 | orchestrator | 2026-01-13 00:48:41.716623 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-13 00:48:41.716632 | 
orchestrator | Tuesday 13 January 2026 00:46:59 +0000 (0:00:01.363) 0:00:29.259 ******* 2026-01-13 00:48:41.716641 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:41.716649 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:48:41.716657 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:48:41.716663 | orchestrator | 2026-01-13 00:48:41.716671 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-13 00:48:41.716678 | orchestrator | Tuesday 13 January 2026 00:47:07 +0000 (0:00:08.407) 0:00:37.666 ******* 2026-01-13 00:48:41.716685 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:41.716693 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:48:41.716700 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:48:41.716708 | orchestrator | 2026-01-13 00:48:41.716715 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-13 00:48:41.716722 | orchestrator | 2026-01-13 00:48:41.716729 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-13 00:48:41.716736 | orchestrator | Tuesday 13 January 2026 00:47:07 +0000 (0:00:00.250) 0:00:37.917 ******* 2026-01-13 00:48:41.716745 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:48:41.716753 | orchestrator | 2026-01-13 00:48:41.716761 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-13 00:48:41.716769 | orchestrator | Tuesday 13 January 2026 00:47:08 +0000 (0:00:00.571) 0:00:38.489 ******* 2026-01-13 00:48:41.716777 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:48:41.716785 | orchestrator | 2026-01-13 00:48:41.716794 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-13 00:48:41.716803 | orchestrator | Tuesday 13 January 2026 00:47:08 +0000 (0:00:00.202) 0:00:38.691 ******* 2026-01-13 00:48:41.716811 | orchestrator 
| changed: [testbed-node-0] 2026-01-13 00:48:41.716820 | orchestrator | 2026-01-13 00:48:41.716829 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-13 00:48:41.716837 | orchestrator | Tuesday 13 January 2026 00:47:10 +0000 (0:00:01.622) 0:00:40.313 ******* 2026-01-13 00:48:41.716846 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:48:41.716855 | orchestrator | 2026-01-13 00:48:41.716863 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-13 00:48:41.716870 | orchestrator | 2026-01-13 00:48:41.716878 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-13 00:48:41.716886 | orchestrator | Tuesday 13 January 2026 00:48:02 +0000 (0:00:51.987) 0:01:32.301 ******* 2026-01-13 00:48:41.716894 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:48:41.716902 | orchestrator | 2026-01-13 00:48:41.716910 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-13 00:48:41.716919 | orchestrator | Tuesday 13 January 2026 00:48:02 +0000 (0:00:00.818) 0:01:33.119 ******* 2026-01-13 00:48:41.716928 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:48:41.716936 | orchestrator | 2026-01-13 00:48:41.716945 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-13 00:48:41.716954 | orchestrator | Tuesday 13 January 2026 00:48:03 +0000 (0:00:00.277) 0:01:33.397 ******* 2026-01-13 00:48:41.716962 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:48:41.716977 | orchestrator | 2026-01-13 00:48:41.716985 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-13 00:48:41.716994 | orchestrator | Tuesday 13 January 2026 00:48:10 +0000 (0:00:07.097) 0:01:40.494 ******* 2026-01-13 00:48:41.717001 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:48:41.717009 
| orchestrator | 2026-01-13 00:48:41.717017 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-13 00:48:41.717024 | orchestrator | 2026-01-13 00:48:41.717032 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-13 00:48:41.717039 | orchestrator | Tuesday 13 January 2026 00:48:20 +0000 (0:00:10.583) 0:01:51.078 ******* 2026-01-13 00:48:41.717048 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:48:41.717056 | orchestrator | 2026-01-13 00:48:41.717072 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-13 00:48:41.717078 | orchestrator | Tuesday 13 January 2026 00:48:21 +0000 (0:00:00.724) 0:01:51.803 ******* 2026-01-13 00:48:41.717083 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:48:41.717087 | orchestrator | 2026-01-13 00:48:41.717092 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-13 00:48:41.717097 | orchestrator | Tuesday 13 January 2026 00:48:21 +0000 (0:00:00.186) 0:01:51.989 ******* 2026-01-13 00:48:41.717101 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:48:41.717106 | orchestrator | 2026-01-13 00:48:41.717111 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-13 00:48:41.717116 | orchestrator | Tuesday 13 January 2026 00:48:23 +0000 (0:00:01.741) 0:01:53.731 ******* 2026-01-13 00:48:41.717121 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:48:41.717125 | orchestrator | 2026-01-13 00:48:41.717131 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-13 00:48:41.717136 | orchestrator | 2026-01-13 00:48:41.717142 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-13 00:48:41.717147 | orchestrator | Tuesday 13 January 2026 00:48:37 +0000 (0:00:14.303) 
0:02:08.034 ******* 2026-01-13 00:48:41.717152 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:48:41.717160 | orchestrator | 2026-01-13 00:48:41.717169 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-13 00:48:41.717176 | orchestrator | Tuesday 13 January 2026 00:48:38 +0000 (0:00:00.518) 0:02:08.552 ******* 2026-01-13 00:48:41.717184 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-13 00:48:41.717193 | orchestrator | enable_outward_rabbitmq_True 2026-01-13 00:48:41.717201 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-13 00:48:41.717210 | orchestrator | outward_rabbitmq_restart 2026-01-13 00:48:41.717219 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:48:41.717228 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:48:41.717236 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:48:41.717244 | orchestrator | 2026-01-13 00:48:41.717253 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-01-13 00:48:41.717262 | orchestrator | skipping: no hosts matched 2026-01-13 00:48:41.717271 | orchestrator | 2026-01-13 00:48:41.717286 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-01-13 00:48:41.717295 | orchestrator | skipping: no hosts matched 2026-01-13 00:48:41.717304 | orchestrator | 2026-01-13 00:48:41.717314 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-01-13 00:48:41.717324 | orchestrator | skipping: no hosts matched 2026-01-13 00:48:41.717334 | orchestrator | 2026-01-13 00:48:41.717343 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:48:41.717353 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-13 
00:48:41.717363 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-13 00:48:41.717379 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:48:41.717389 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 00:48:41.717400 | orchestrator | 2026-01-13 00:48:41.717410 | orchestrator | 2026-01-13 00:48:41.717419 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:48:41.717449 | orchestrator | Tuesday 13 January 2026 00:48:40 +0000 (0:00:02.145) 0:02:10.698 ******* 2026-01-13 00:48:41.717458 | orchestrator | =============================================================================== 2026-01-13 00:48:41.717467 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 76.87s 2026-01-13 00:48:41.717480 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.46s 2026-01-13 00:48:41.717489 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.41s 2026-01-13 00:48:41.717500 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.88s 2026-01-13 00:48:41.717510 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.30s 2026-01-13 00:48:41.717523 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.15s 2026-01-13 00:48:41.717532 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.14s 2026-01-13 00:48:41.717542 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.12s 2026-01-13 00:48:41.717552 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.10s 2026-01-13 00:48:41.717561 | 
orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.10s 2026-01-13 00:48:41.717571 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.90s 2026-01-13 00:48:41.717580 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.70s 2026-01-13 00:48:41.717590 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.64s 2026-01-13 00:48:41.717601 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.63s 2026-01-13 00:48:41.717609 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.36s 2026-01-13 00:48:41.717618 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.15s 2026-01-13 00:48:41.717627 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.99s 2026-01-13 00:48:41.717643 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.96s 2026-01-13 00:48:41.717652 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.77s 2026-01-13 00:48:41.717660 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.76s 2026-01-13 00:48:41.717668 | orchestrator | 2026-01-13 00:48:41 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:48:41.717676 | orchestrator | 2026-01-13 00:48:41 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED 2026-01-13 00:48:41.718110 | orchestrator | 2026-01-13 00:48:41 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:48:41.718144 | orchestrator | 2026-01-13 00:48:41 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:48:44.764702 | orchestrator | 2026-01-13 00:48:44 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:48:44.767240 | orchestrator | 2026-01-13 00:48:44 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:48:44.767410 | orchestrator | 2026-01-13 00:48:44 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:48:47.825275 | orchestrator | 2026-01-13 00:48:47 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:48:47.827500 | orchestrator | 2026-01-13 00:48:47 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:48:47.829363 | orchestrator | 2026-01-13 00:48:47 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:48:47.829440 | orchestrator | 2026-01-13 00:48:47 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:48:50.885779 | orchestrator | 2026-01-13 00:48:50 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:48:50.886993 | orchestrator | 2026-01-13 00:48:50 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:48:50.890272 | orchestrator | 2026-01-13 00:48:50 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:48:50.890598 | orchestrator | 2026-01-13 00:48:50 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:48:53.930363 | orchestrator | 2026-01-13 00:48:53 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:48:53.933206 | orchestrator | 2026-01-13 00:48:53 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:48:53.933265 | orchestrator | 2026-01-13 00:48:53 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:48:53.933275 | orchestrator | 2026-01-13 00:48:53 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:48:56.963568 | orchestrator | 2026-01-13 00:48:56 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:48:56.963689 | orchestrator | 2026-01-13 00:48:56 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:48:56.964606 | orchestrator | 2026-01-13 00:48:56 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:48:56.964647 | orchestrator | 2026-01-13 00:48:56 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:49:00.002916 | orchestrator | 2026-01-13 00:49:00 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:49:00.003007 | orchestrator | 2026-01-13 00:49:00 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:49:00.003019 | orchestrator | 2026-01-13 00:49:00 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:49:00.003027 | orchestrator | 2026-01-13 00:49:00 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:49:03.045868 | orchestrator | 2026-01-13 00:49:03 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:49:03.048908 | orchestrator | 2026-01-13 00:49:03 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:49:03.052811 | orchestrator | 2026-01-13 00:49:03 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:49:03.052873 | orchestrator | 2026-01-13 00:49:03 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:49:06.092112 | orchestrator | 2026-01-13 00:49:06 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:49:06.092261 | orchestrator | 2026-01-13 00:49:06 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:49:06.093099 | orchestrator | 2026-01-13 00:49:06 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:49:06.093287 | orchestrator | 2026-01-13 00:49:06 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:49:09.138340 | orchestrator | 2026-01-13 00:49:09 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:49:09.140553 | orchestrator | 2026-01-13 00:49:09 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:49:09.141916 | orchestrator | 2026-01-13 00:49:09 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:49:09.142185 | orchestrator | 2026-01-13 00:49:09 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:49:12.168145 | orchestrator | 2026-01-13 00:49:12 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:49:12.168207 | orchestrator | 2026-01-13 00:49:12 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:49:12.168218 | orchestrator | 2026-01-13 00:49:12 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:49:12.168228 | orchestrator | 2026-01-13 00:49:12 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:49:15.213195 | orchestrator | 2026-01-13 00:49:15 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:49:15.215465 | orchestrator | 2026-01-13 00:49:15 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:49:15.216546 | orchestrator | 2026-01-13 00:49:15 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:49:15.216616 | orchestrator | 2026-01-13 00:49:15 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:49:18.261145 | orchestrator | 2026-01-13 00:49:18 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:49:18.262972 | orchestrator | 2026-01-13 00:49:18 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:49:18.265065 | orchestrator | 2026-01-13 00:49:18 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
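The records above show the deployment tooling polling three task IDs once per interval until each leaves the STARTED state. A minimal sketch of such a wait loop, assuming a hypothetical `get_state(task_id)` callable in place of the real task-status API:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll task states until every task leaves the STARTED state.

    get_state(task_id) -> state string, e.g. "STARTED" or "SUCCESS".
    Returns a dict mapping each task id to its final state.
    """
    pending = set(task_ids)
    final = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                final[task_id] = state
        pending -= final.keys()
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return final
```

Tasks that have already finished are dropped from `pending`, so only unfinished tasks are re-queried on the next cycle, mirroring how finished task IDs stop appearing in the log.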
2026-01-13 00:49:18.265114 | orchestrator | 2026-01-13 00:49:18 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:49:21.305001 | orchestrator | 2026-01-13 00:49:21 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:49:21.306270 | orchestrator | 2026-01-13 00:49:21 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:49:21.310088 | orchestrator | 2026-01-13 00:49:21 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:49:21.310501 | orchestrator | 2026-01-13 00:49:21 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:49:24.345980 | orchestrator | 2026-01-13 00:49:24 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:49:24.346805 | orchestrator | 2026-01-13 00:49:24 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:49:24.348339 | orchestrator | 2026-01-13 00:49:24 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:49:24.348455 | orchestrator | 2026-01-13 00:49:24 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:49:27.388461 | orchestrator | 2026-01-13 00:49:27 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:49:27.390869 | orchestrator | 2026-01-13 00:49:27 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state STARTED
2026-01-13 00:49:27.392542 | orchestrator | 2026-01-13 00:49:27 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:49:27.394070 | orchestrator | 2026-01-13 00:49:27 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:49:30.432045 | orchestrator | 2026-01-13 00:49:30 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:49:30.436911 | orchestrator | 2026-01-13 00:49:30 | INFO  | Task 3ac9bf31-ed84-4bd5-a171-9bf739c5a717 is in state SUCCESS
2026-01-13 00:49:30.437128 | orchestrator |
2026-01-13 00:49:30.439154 | orchestrator |
2026-01-13 00:49:30.439204 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-13 00:49:30.439210 | orchestrator |
2026-01-13 00:49:30.439214 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-13 00:49:30.439219 | orchestrator | Tuesday 13 January 2026 00:47:15 +0000 (0:00:00.147) 0:00:00.147 *******
2026-01-13 00:49:30.439223 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:49:30.439228 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:49:30.439232 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:49:30.439235 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:49:30.439239 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:49:30.439243 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:49:30.439247 | orchestrator |
2026-01-13 00:49:30.439250 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-13 00:49:30.439254 | orchestrator | Tuesday 13 January 2026 00:47:16 +0000 (0:00:00.820) 0:00:00.968 *******
2026-01-13 00:49:30.439258 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-01-13 00:49:30.439262 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-01-13 00:49:30.439266 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-01-13 00:49:30.439270 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-01-13 00:49:30.439273 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-01-13 00:49:30.439277 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-01-13 00:49:30.439281 | orchestrator |
2026-01-13 00:49:30.439284 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-01-13 00:49:30.439289 | orchestrator |
2026-01-13 00:49:30.439295 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-01-13 00:49:30.439300 | orchestrator | Tuesday 13 January 2026 00:47:17 +0000 (0:00:00.829) 0:00:01.797 *******
2026-01-13 00:49:30.439336 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:49:30.439359 | orchestrator |
2026-01-13 00:49:30.439365 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-01-13 00:49:30.439399 | orchestrator | Tuesday 13 January 2026 00:47:18 +0000 (0:00:00.925) 0:00:02.723 *******
2026-01-13 00:49:30.439410 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439777 | orchestrator |
2026-01-13 00:49:30.439791 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-01-13 00:49:30.439795 | orchestrator | Tuesday 13 January 2026 00:47:19 +0000 (0:00:00.944) 0:00:03.667 *******
2026-01-13 00:49:30.439799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439832 | orchestrator |
2026-01-13 00:49:30.439839 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-01-13 00:49:30.439897 | orchestrator | Tuesday 13 January 2026 00:47:20 +0000 (0:00:01.456) 0:00:05.124 *******
2026-01-13 00:49:30.439908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439914 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439925 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439950 | orchestrator |
2026-01-13 00:49:30.439956 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-01-13 00:49:30.439962 | orchestrator | Tuesday 13 January 2026 00:47:21 +0000 (0:00:01.187) 0:00:06.311 *******
2026-01-13 00:49:30.439968 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439980 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439992 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.439998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.440004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.440010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.440015 | orchestrator |
2026-01-13 00:49:30.440026 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-01-13 00:49:30.440032 | orchestrator | Tuesday 13 January 2026 00:47:23 +0000 (0:00:02.037) 0:00:08.349 *******
2026-01-13 00:49:30.440038 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.440044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.440050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.440056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.440066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.440076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 00:49:30.440082 | orchestrator |
2026-01-13 00:49:30.440088 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-01-13 00:49:30.440094 | orchestrator | Tuesday 13 January 2026 00:47:25 +0000 (0:00:01.860) 0:00:10.210 *******
2026-01-13 00:49:30.440101 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:49:30.440108 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:49:30.440113 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:49:30.440119 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:49:30.440126 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:49:30.440132 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:49:30.440138 | orchestrator |
2026-01-13 00:49:30.440143 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-01-13 00:49:30.440149 | orchestrator | Tuesday 13 January 2026 00:47:28 +0000 (0:00:02.695) 0:00:12.906 *******
2026-01-13 00:49:30.440155 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-01-13 00:49:30.440163 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-01-13 00:49:30.440169 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-01-13 00:49:30.440175 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-01-13 00:49:30.440181 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-01-13 00:49:30.440187 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-01-13 00:49:30.440193 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-13 00:49:30.440199 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-13 00:49:30.440210 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-13 00:49:30.440218 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-13 00:49:30.440223 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-13 00:49:30.440226 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-13 00:49:30.440230 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-13 00:49:30.440236 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-13 00:49:30.440240 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-13 00:49:30.440243 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-13 00:49:30.440247 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-13 00:49:30.440255 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-01-13 00:49:30.440259 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-13 00:49:30.440264 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-13 00:49:30.440267 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-13 00:49:30.440271 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-13 00:49:30.440275 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-13 00:49:30.440278 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-13 00:49:30.440282 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-13 00:49:30.440289 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-13 00:49:30.440293 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-13 00:49:30.440297 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-13 00:49:30.440301 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-13 00:49:30.440304 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-13 00:49:30.440308 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-13 00:49:30.440312 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-13 00:49:30.440315 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-13 00:49:30.440319 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-13 00:49:30.440323 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-13 00:49:30.440326 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-13 00:49:30.440330 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-13 00:49:30.440334 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-13 00:49:30.440338 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-13 00:49:30.440362 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-13 00:49:30.440365 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-13 00:49:30.440369 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-13 00:49:30.440373 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-01-13 00:49:30.440378 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-01-13 00:49:30.440385 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-01-13 00:49:30.440389 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-01-13 00:49:30.440397 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-01-13 00:49:30.440401 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-01-13 00:49:30.440404 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-13 00:49:30.440410 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-13 00:49:30.440416 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-13 00:49:30.440423 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-13 00:49:30.440431 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-13 00:49:30.440438 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-13 00:49:30.440443 | orchestrator |
2026-01-13 00:49:30.440449 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-13 00:49:30.440454 | orchestrator | Tuesday 13 January 2026 00:47:47 +0000 (0:00:18.840) 0:00:31.746 *******
2026-01-13 00:49:30.440460 | orchestrator |
2026-01-13 00:49:30.440466 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-13 00:49:30.440471 | orchestrator | Tuesday 13 January 2026 00:47:47 +0000 (0:00:00.050) 0:00:31.797 *******
2026-01-13 00:49:30.440476 | orchestrator |
2026-01-13 00:49:30.440482 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-13 00:49:30.440487 | orchestrator | Tuesday 13 January 2026 00:47:47 +0000 (0:00:00.047) 0:00:31.844 *******
2026-01-13 00:49:30.440493 | orchestrator |
2026-01-13 00:49:30.440499 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-13 00:49:30.440505 | orchestrator | Tuesday 13 January 2026 00:47:47 +0000 (0:00:00.053) 0:00:31.897 *******
2026-01-13 00:49:30.440510 | orchestrator |
2026-01-13 00:49:30.440516 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
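The "Configure OVN in OVSDB" task above sets per-chassis keys in the `external_ids` column of the `Open_vSwitch` table, with `present` items written and `absent` items removed. A minimal sketch of how one such item maps to an `ovs-vsctl` invocation; the helper name and the dict shapes are illustrative, the values come from the log:

```python
def ovs_vsctl_args(option):
    """Translate one {'name', 'value', 'state'} item into ovs-vsctl arguments.

    'present' (the default) sets the key in external_ids on the local
    Open_vSwitch record; 'absent' removes it.
    """
    name, value = option["name"], option["value"]
    if option.get("state", "present") == "present":
        return ["ovs-vsctl", "set", "Open_vSwitch", ".",
                f"external_ids:{name}={value}"]
    return ["ovs-vsctl", "remove", "Open_vSwitch", ".", "external_ids", name]

# Example values from testbed-node-0 in the run above:
options = [
    {"name": "ovn-encap-ip", "value": "192.168.16.10"},
    {"name": "ovn-encap-type", "value": "geneve"},
    {"name": "ovn-remote",
     "value": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"},
    {"name": "ovn-cms-options",
     "value": "enable-chassis-as-gw,availability-zones=nova", "state": "present"},
]
for opt in options:
    print(" ".join(ovs_vsctl_args(opt)))
```

This also explains the mixed `ok`/`changed` results: on the three control nodes `ovn-bridge-mappings` and `ovn-cms-options` are set (changed), while on the pure compute nodes the same keys are ensured absent, which is a no-op (`ok`) when they were never set.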
2026-01-13 00:49:30.440525 | orchestrator | Tuesday 13 January 2026 00:47:47 +0000 (0:00:00.051) 0:00:31.949 *******
2026-01-13 00:49:30.440531 | orchestrator |
2026-01-13 00:49:30.440537 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-13 00:49:30.440542 | orchestrator | Tuesday 13 January 2026 00:47:47 +0000 (0:00:00.049) 0:00:31.999 *******
2026-01-13 00:49:30.440549 | orchestrator |
2026-01-13 00:49:30.440553 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-01-13 00:49:30.440557 | orchestrator | Tuesday 13 January 2026 00:47:47 +0000 (0:00:00.049) 0:00:32.048 *******
2026-01-13 00:49:30.440563 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:49:30.440569 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:49:30.440574 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:49:30.440578 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:49:30.440582 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:49:30.440586 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:49:30.440589 | orchestrator |
2026-01-13 00:49:30.440593 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-01-13 00:49:30.440597 | orchestrator | Tuesday 13 January 2026 00:47:49 +0000 (0:00:02.017) 0:00:34.066 *******
2026-01-13 00:49:30.440601 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:49:30.440605 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:49:30.440608 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:49:30.440612 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:49:30.440615 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:49:30.440619 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:49:30.440623 | orchestrator |
2026-01-13 00:49:30.440626 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-01-13 00:49:30.440635 | orchestrator |
2026-01-13 00:49:30.440639 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-13 00:49:30.440643 | orchestrator | Tuesday 13 January 2026 00:48:18 +0000 (0:00:28.701) 0:01:02.768 ******* 2026-01-13 00:49:30.440647 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:49:30.440651 | orchestrator | 2026-01-13 00:49:30.440654 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-13 00:49:30.440658 | orchestrator | Tuesday 13 January 2026 00:48:19 +0000 (0:00:00.706) 0:01:03.474 ******* 2026-01-13 00:49:30.440662 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:49:30.440665 | orchestrator | 2026-01-13 00:49:30.440669 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-13 00:49:30.440673 | orchestrator | Tuesday 13 January 2026 00:48:19 +0000 (0:00:00.410) 0:01:03.885 ******* 2026-01-13 00:49:30.440676 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.440680 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.440684 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.440688 | orchestrator | 2026-01-13 00:49:30.440691 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-13 00:49:30.440695 | orchestrator | Tuesday 13 January 2026 00:48:20 +0000 (0:00:00.941) 0:01:04.826 ******* 2026-01-13 00:49:30.440699 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.440703 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.440706 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.440710 | orchestrator | 2026-01-13 00:49:30.440717 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-13 00:49:30.440721 | orchestrator | 
Tuesday 13 January 2026 00:48:20 +0000 (0:00:00.290) 0:01:05.116 ******* 2026-01-13 00:49:30.440725 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.440728 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.440732 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.440736 | orchestrator | 2026-01-13 00:49:30.440739 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-13 00:49:30.440743 | orchestrator | Tuesday 13 January 2026 00:48:20 +0000 (0:00:00.265) 0:01:05.382 ******* 2026-01-13 00:49:30.440747 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.440750 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.440754 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.440758 | orchestrator | 2026-01-13 00:49:30.440762 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-13 00:49:30.440765 | orchestrator | Tuesday 13 January 2026 00:48:21 +0000 (0:00:00.299) 0:01:05.681 ******* 2026-01-13 00:49:30.440769 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.440773 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.440776 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.440780 | orchestrator | 2026-01-13 00:49:30.440784 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-13 00:49:30.440788 | orchestrator | Tuesday 13 January 2026 00:48:21 +0000 (0:00:00.381) 0:01:06.063 ******* 2026-01-13 00:49:30.440791 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.440795 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.440799 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.440802 | orchestrator | 2026-01-13 00:49:30.440806 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-13 00:49:30.440810 | orchestrator | Tuesday 13 January 2026 00:48:21 +0000 (0:00:00.253) 
0:01:06.317 ******* 2026-01-13 00:49:30.440814 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.440817 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.440821 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.440825 | orchestrator | 2026-01-13 00:49:30.440828 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-13 00:49:30.440832 | orchestrator | Tuesday 13 January 2026 00:48:22 +0000 (0:00:00.265) 0:01:06.582 ******* 2026-01-13 00:49:30.440839 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.440843 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.440847 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.440851 | orchestrator | 2026-01-13 00:49:30.440854 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-13 00:49:30.440858 | orchestrator | Tuesday 13 January 2026 00:48:22 +0000 (0:00:00.249) 0:01:06.832 ******* 2026-01-13 00:49:30.440862 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.440865 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.440869 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.440873 | orchestrator | 2026-01-13 00:49:30.440882 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-13 00:49:30.440886 | orchestrator | Tuesday 13 January 2026 00:48:22 +0000 (0:00:00.351) 0:01:07.184 ******* 2026-01-13 00:49:30.440890 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.440894 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.440897 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.440901 | orchestrator | 2026-01-13 00:49:30.440905 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-13 00:49:30.440908 | orchestrator | Tuesday 13 January 2026 00:48:22 +0000 (0:00:00.254) 
0:01:07.439 ******* 2026-01-13 00:49:30.440912 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.440916 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.440919 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.440923 | orchestrator | 2026-01-13 00:49:30.440927 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-13 00:49:30.440930 | orchestrator | Tuesday 13 January 2026 00:48:23 +0000 (0:00:00.231) 0:01:07.670 ******* 2026-01-13 00:49:30.440934 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.440938 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.440941 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.440945 | orchestrator | 2026-01-13 00:49:30.440949 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-13 00:49:30.440952 | orchestrator | Tuesday 13 January 2026 00:48:23 +0000 (0:00:00.257) 0:01:07.928 ******* 2026-01-13 00:49:30.440956 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.440960 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.440963 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.440967 | orchestrator | 2026-01-13 00:49:30.440971 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-13 00:49:30.440975 | orchestrator | Tuesday 13 January 2026 00:48:23 +0000 (0:00:00.369) 0:01:08.297 ******* 2026-01-13 00:49:30.440978 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.440982 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.440986 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.440989 | orchestrator | 2026-01-13 00:49:30.440993 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-13 00:49:30.440997 | orchestrator | Tuesday 13 January 2026 00:48:24 +0000 (0:00:00.326) 
0:01:08.624 ******* 2026-01-13 00:49:30.441000 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.441004 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.441008 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.441011 | orchestrator | 2026-01-13 00:49:30.441015 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-13 00:49:30.441019 | orchestrator | Tuesday 13 January 2026 00:48:24 +0000 (0:00:00.352) 0:01:08.976 ******* 2026-01-13 00:49:30.441068 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.441078 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.441084 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.441090 | orchestrator | 2026-01-13 00:49:30.441096 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-13 00:49:30.441102 | orchestrator | Tuesday 13 January 2026 00:48:24 +0000 (0:00:00.301) 0:01:09.278 ******* 2026-01-13 00:49:30.441114 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.441121 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.441131 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.441137 | orchestrator | 2026-01-13 00:49:30.441144 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-13 00:49:30.441150 | orchestrator | Tuesday 13 January 2026 00:48:25 +0000 (0:00:00.343) 0:01:09.622 ******* 2026-01-13 00:49:30.441156 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:49:30.441163 | orchestrator | 2026-01-13 00:49:30.441169 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-13 00:49:30.441175 | orchestrator | Tuesday 13 January 2026 00:48:26 +0000 (0:00:00.913) 0:01:10.536 ******* 2026-01-13 00:49:30.441182 | orchestrator 
| ok: [testbed-node-0] 2026-01-13 00:49:30.441188 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.441194 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.441201 | orchestrator | 2026-01-13 00:49:30.441207 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-13 00:49:30.441213 | orchestrator | Tuesday 13 January 2026 00:48:26 +0000 (0:00:00.466) 0:01:11.002 ******* 2026-01-13 00:49:30.441220 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.441226 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.441232 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.441239 | orchestrator | 2026-01-13 00:49:30.441245 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-13 00:49:30.441252 | orchestrator | Tuesday 13 January 2026 00:48:26 +0000 (0:00:00.452) 0:01:11.455 ******* 2026-01-13 00:49:30.441258 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.441265 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.441271 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.441278 | orchestrator | 2026-01-13 00:49:30.441285 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-13 00:49:30.441288 | orchestrator | Tuesday 13 January 2026 00:48:27 +0000 (0:00:00.586) 0:01:12.041 ******* 2026-01-13 00:49:30.441293 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.441299 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.441305 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.441311 | orchestrator | 2026-01-13 00:49:30.441317 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-13 00:49:30.441322 | orchestrator | Tuesday 13 January 2026 00:48:27 +0000 (0:00:00.337) 0:01:12.379 ******* 2026-01-13 00:49:30.441328 | orchestrator | skipping: [testbed-node-0] 
2026-01-13 00:49:30.441335 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.441361 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.441367 | orchestrator | 2026-01-13 00:49:30.441373 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-13 00:49:30.441379 | orchestrator | Tuesday 13 January 2026 00:48:28 +0000 (0:00:00.330) 0:01:12.710 ******* 2026-01-13 00:49:30.441385 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.441391 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.441397 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.441403 | orchestrator | 2026-01-13 00:49:30.441414 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-13 00:49:30.441420 | orchestrator | Tuesday 13 January 2026 00:48:28 +0000 (0:00:00.351) 0:01:13.062 ******* 2026-01-13 00:49:30.441425 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.441431 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.441437 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.441444 | orchestrator | 2026-01-13 00:49:30.441450 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-13 00:49:30.441456 | orchestrator | Tuesday 13 January 2026 00:48:29 +0000 (0:00:00.568) 0:01:13.631 ******* 2026-01-13 00:49:30.441462 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.441475 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.441481 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.441488 | orchestrator | 2026-01-13 00:49:30.441494 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-13 00:49:30.441500 | orchestrator | Tuesday 13 January 2026 00:48:29 +0000 (0:00:00.299) 0:01:13.930 ******* 2026-01-13 00:49:30.441508 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-13 00:49:30.441552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441586 | orchestrator | 2026-01-13 00:49:30.441592 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-13 00:49:30.441598 | orchestrator | 
Tuesday 13 January 2026 00:48:30 +0000 (0:00:01.425) 0:01:15.355 ******* 2026-01-13 00:49:30.441604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441677 | orchestrator | 
2026-01-13 00:49:30.441686 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-13 00:49:30.441691 | orchestrator | Tuesday 13 January 2026 00:48:35 +0000 (0:00:04.484) 0:01:19.839 ******* 2026-01-13 00:49:30.441700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.441759 | orchestrator | 2026-01-13 00:49:30.441770 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-13 00:49:30.441776 | orchestrator | Tuesday 13 January 2026 00:48:38 +0000 (0:00:03.094) 0:01:22.934 ******* 2026-01-13 00:49:30.441782 | orchestrator | 2026-01-13 00:49:30.441788 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-13 00:49:30.441793 | orchestrator | Tuesday 13 January 2026 00:48:38 +0000 (0:00:00.069) 0:01:23.003 ******* 2026-01-13 00:49:30.441799 | orchestrator | 2026-01-13 00:49:30.441805 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-13 00:49:30.441811 | orchestrator | Tuesday 13 January 2026 00:48:38 +0000 (0:00:00.063) 0:01:23.067 ******* 2026-01-13 00:49:30.441817 | orchestrator | 2026-01-13 00:49:30.441822 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-13 00:49:30.441829 | orchestrator | Tuesday 13 January 2026 00:48:38 +0000 (0:00:00.067) 0:01:23.134 ******* 2026-01-13 00:49:30.441838 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:49:30.441844 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:49:30.441851 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:49:30.441856 | orchestrator | 2026-01-13 00:49:30.441862 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-13 00:49:30.441868 | orchestrator | Tuesday 13 January 2026 00:48:41 +0000 (0:00:02.377) 0:01:25.512 ******* 2026-01-13 00:49:30.441873 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:49:30.441879 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:49:30.441885 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:49:30.441891 | orchestrator | 2026-01-13 
00:49:30.441896 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-13 00:49:30.441902 | orchestrator | Tuesday 13 January 2026 00:48:43 +0000 (0:00:02.393) 0:01:27.905 ******* 2026-01-13 00:49:30.441908 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:49:30.441914 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:49:30.441920 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:49:30.441925 | orchestrator | 2026-01-13 00:49:30.441931 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-13 00:49:30.441937 | orchestrator | Tuesday 13 January 2026 00:48:51 +0000 (0:00:07.858) 0:01:35.764 ******* 2026-01-13 00:49:30.441943 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.441949 | orchestrator | 2026-01-13 00:49:30.441954 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-13 00:49:30.441960 | orchestrator | Tuesday 13 January 2026 00:48:51 +0000 (0:00:00.131) 0:01:35.895 ******* 2026-01-13 00:49:30.441965 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.441972 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.441978 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.441984 | orchestrator | 2026-01-13 00:49:30.441990 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-13 00:49:30.441996 | orchestrator | Tuesday 13 January 2026 00:48:52 +0000 (0:00:00.709) 0:01:36.604 ******* 2026-01-13 00:49:30.442003 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.442009 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.442071 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:49:30.442079 | orchestrator | 2026-01-13 00:49:30.442086 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-13 00:49:30.442093 | orchestrator | Tuesday 
13 January 2026 00:48:52 +0000 (0:00:00.524) 0:01:37.129 ******* 2026-01-13 00:49:30.442100 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.442107 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.442114 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.442120 | orchestrator | 2026-01-13 00:49:30.442127 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-13 00:49:30.442133 | orchestrator | Tuesday 13 January 2026 00:48:53 +0000 (0:00:00.687) 0:01:37.816 ******* 2026-01-13 00:49:30.442140 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.442147 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.442154 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:49:30.442168 | orchestrator | 2026-01-13 00:49:30.442175 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-13 00:49:30.442182 | orchestrator | Tuesday 13 January 2026 00:48:54 +0000 (0:00:00.828) 0:01:38.644 ******* 2026-01-13 00:49:30.442189 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.442196 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.442210 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.442217 | orchestrator | 2026-01-13 00:49:30.442224 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-13 00:49:30.442230 | orchestrator | Tuesday 13 January 2026 00:48:54 +0000 (0:00:00.792) 0:01:39.437 ******* 2026-01-13 00:49:30.442237 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.442243 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.442250 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.442256 | orchestrator | 2026-01-13 00:49:30.442263 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-13 00:49:30.442270 | orchestrator | Tuesday 13 January 2026 00:48:55 +0000 (0:00:00.857) 0:01:40.295 
******* 2026-01-13 00:49:30.442278 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.442284 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.442291 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.442298 | orchestrator | 2026-01-13 00:49:30.442304 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-13 00:49:30.442310 | orchestrator | Tuesday 13 January 2026 00:48:56 +0000 (0:00:00.309) 0:01:40.604 ******* 2026-01-13 00:49:30.442318 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442327 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442334 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442440 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442452 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442459 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442465 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442479 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442496 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442504 | orchestrator | 2026-01-13 00:49:30.442510 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-13 00:49:30.442517 | orchestrator | Tuesday 13 January 2026 00:48:57 +0000 (0:00:01.710) 0:01:42.314 ******* 2026-01-13 00:49:30.442524 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442530 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442536 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442543 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442567 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-01-13 00:49:30.442584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442591 | orchestrator | 2026-01-13 00:49:30.442597 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-13 00:49:30.442603 | orchestrator | Tuesday 13 January 2026 00:49:02 +0000 (0:00:04.360) 0:01:46.675 ******* 2026-01-13 00:49:30.442615 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442621 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442628 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442635 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442641 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442681 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 00:49:30.442687 | orchestrator | 2026-01-13 00:49:30.442693 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-13 00:49:30.442700 | orchestrator | Tuesday 13 January 2026 00:49:05 +0000 (0:00:02.928) 0:01:49.604 ******* 2026-01-13 00:49:30.442706 | orchestrator | 2026-01-13 00:49:30.442712 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-13 00:49:30.442718 | orchestrator | Tuesday 13 January 2026 00:49:05 +0000 (0:00:00.060) 0:01:49.665 ******* 2026-01-13 00:49:30.442725 | orchestrator | 2026-01-13 00:49:30.442731 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-13 00:49:30.442737 | orchestrator | Tuesday 13 January 2026 00:49:05 +0000 (0:00:00.065) 0:01:49.731 ******* 2026-01-13 00:49:30.442743 | orchestrator | 2026-01-13 00:49:30.442750 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-13 00:49:30.442757 | orchestrator | Tuesday 13 January 2026 00:49:05 +0000 (0:00:00.063) 0:01:49.794 ******* 2026-01-13 00:49:30.442764 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:49:30.442770 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:49:30.442777 | orchestrator | 2026-01-13 00:49:30.442787 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] 
************************* 2026-01-13 00:49:30.442793 | orchestrator | Tuesday 13 January 2026 00:49:11 +0000 (0:00:06.304) 0:01:56.098 ******* 2026-01-13 00:49:30.442800 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:49:30.442806 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:49:30.442812 | orchestrator | 2026-01-13 00:49:30.442818 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-13 00:49:30.442824 | orchestrator | Tuesday 13 January 2026 00:49:17 +0000 (0:00:06.211) 0:02:02.310 ******* 2026-01-13 00:49:30.442830 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:49:30.442836 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:49:30.442843 | orchestrator | 2026-01-13 00:49:30.442849 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-13 00:49:30.442856 | orchestrator | Tuesday 13 January 2026 00:49:24 +0000 (0:00:06.433) 0:02:08.743 ******* 2026-01-13 00:49:30.442863 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:49:30.442869 | orchestrator | 2026-01-13 00:49:30.442875 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-13 00:49:30.442881 | orchestrator | Tuesday 13 January 2026 00:49:24 +0000 (0:00:00.144) 0:02:08.887 ******* 2026-01-13 00:49:30.442888 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.442894 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.442900 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.442906 | orchestrator | 2026-01-13 00:49:30.442913 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-13 00:49:30.442919 | orchestrator | Tuesday 13 January 2026 00:49:25 +0000 (0:00:00.893) 0:02:09.781 ******* 2026-01-13 00:49:30.442925 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.442931 | orchestrator | skipping: [testbed-node-2] 2026-01-13 
00:49:30.442942 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:49:30.442949 | orchestrator | 2026-01-13 00:49:30.442955 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-13 00:49:30.442962 | orchestrator | Tuesday 13 January 2026 00:49:25 +0000 (0:00:00.663) 0:02:10.444 ******* 2026-01-13 00:49:30.442969 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.442975 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.442982 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.442988 | orchestrator | 2026-01-13 00:49:30.442995 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-13 00:49:30.443001 | orchestrator | Tuesday 13 January 2026 00:49:26 +0000 (0:00:00.788) 0:02:11.233 ******* 2026-01-13 00:49:30.443007 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:49:30.443013 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:49:30.443019 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:49:30.443026 | orchestrator | 2026-01-13 00:49:30.443033 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-13 00:49:30.443039 | orchestrator | Tuesday 13 January 2026 00:49:27 +0000 (0:00:00.612) 0:02:11.845 ******* 2026-01-13 00:49:30.443045 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.443052 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.443062 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:49:30.443068 | orchestrator | 2026-01-13 00:49:30.443075 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-13 00:49:30.443081 | orchestrator | Tuesday 13 January 2026 00:49:28 +0000 (0:00:00.787) 0:02:12.633 ******* 2026-01-13 00:49:30.443087 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:49:30.443093 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:49:30.443099 | orchestrator | ok: [testbed-node-2] 
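[Editor's note] The "Wait for ovn-nb-db" / "Wait for ovn-sb-db" tasks above are readiness probes against the OVN database endpoints. A minimal sketch of that kind of check, assuming the conventional OVN NB/SB ports 6641/6642 (the hostnames, ports, and retry interval here are illustrative, not taken from the job log):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Retry a TCP connect until host:port accepts, or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True  # something is listening -> treat the endpoint as up
        except OSError:
            time.sleep(interval)  # not up yet; wait and retry
    return False

# Hypothetical usage: probe the NB (6641) and SB (6642) endpoints on a node.
# for port in (6641, 6642):
#     assert wait_for_port("testbed-node-0", port)
```

Ansible's own `wait_for` module performs essentially this loop; the sketch only shows the mechanism.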
2026-01-13 00:49:30.443105 | orchestrator |
2026-01-13 00:49:30.443111 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 00:49:30.443118 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-13 00:49:30.443379 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-13 00:49:30.443394 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-13 00:49:30.443401 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:49:30.443408 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:49:30.443414 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:49:30.443420 | orchestrator |
2026-01-13 00:49:30.443426 | orchestrator |
2026-01-13 00:49:30.443433 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 00:49:30.443439 | orchestrator | Tuesday 13 January 2026 00:49:29 +0000 (0:00:00.910) 0:02:13.543 *******
2026-01-13 00:49:30.443445 | orchestrator | ===============================================================================
2026-01-13 00:49:30.443451 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 28.70s
2026-01-13 00:49:30.443457 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.84s
2026-01-13 00:49:30.443463 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.29s
2026-01-13 00:49:30.443470 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.68s
2026-01-13 00:49:30.443476 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.60s
2026-01-13 00:49:30.443483 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.48s
2026-01-13 00:49:30.443496 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.36s
2026-01-13 00:49:30.443507 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.09s
2026-01-13 00:49:30.443514 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.93s
2026-01-13 00:49:30.443520 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.70s
2026-01-13 00:49:30.443526 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.04s
2026-01-13 00:49:30.443532 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.02s
2026-01-13 00:49:30.443538 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.86s
2026-01-13 00:49:30.443544 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.71s
2026-01-13 00:49:30.443550 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.46s
2026-01-13 00:49:30.443556 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s
2026-01-13 00:49:30.443563 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.19s
2026-01-13 00:49:30.443569 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 0.94s
2026-01-13 00:49:30.443576 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 0.94s
2026-01-13 00:49:30.443582 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 0.93s
2026-01-13 00:49:30.443588 | orchestrator | 2026-01-13 00:49:30 | INFO  | Task
15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:49:30.443595 | orchestrator | 2026-01-13 00:49:30 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:49:33.482001 | orchestrator | 2026-01-13 00:49:33 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:49:33.482904 | orchestrator | 2026-01-13 00:49:33 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:49:33.482939 | orchestrator | 2026-01-13 00:49:33 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:49:36.526558 | orchestrator | 2026-01-13 00:49:36 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:49:36.527428 | orchestrator | 2026-01-13 00:49:36 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:49:36.527457 | orchestrator | 2026-01-13 00:49:36 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:49:39.574763 | orchestrator | 2026-01-13 00:49:39 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:49:39.576655 | orchestrator | 2026-01-13 00:49:39 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:49:39.576989 | orchestrator | 2026-01-13 00:49:39 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:49:42.620818 | orchestrator | 2026-01-13 00:49:42 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:49:42.621488 | orchestrator | 2026-01-13 00:49:42 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:49:42.621771 | orchestrator | 2026-01-13 00:49:42 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:49:45.671085 | orchestrator | 2026-01-13 00:49:45 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:49:45.671949 | orchestrator | 2026-01-13 00:49:45 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 
00:49:45.671972 | orchestrator | 2026-01-13 00:49:45 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:49:48.718588 | orchestrator | 2026-01-13 00:49:48 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:49:48.720436 | orchestrator | 2026-01-13 00:49:48 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:49:48.720478 | orchestrator | 2026-01-13 00:49:48 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:49:51.765823 | orchestrator | 2026-01-13 00:49:51 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:49:51.767887 | orchestrator | 2026-01-13 00:49:51 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:49:51.767947 | orchestrator | 2026-01-13 00:49:51 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:49:54.809857 | orchestrator | 2026-01-13 00:49:54 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:49:54.812388 | orchestrator | 2026-01-13 00:49:54 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:49:54.812442 | orchestrator | 2026-01-13 00:49:54 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:49:57.866197 | orchestrator | 2026-01-13 00:49:57 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:49:57.866269 | orchestrator | 2026-01-13 00:49:57 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:49:57.866281 | orchestrator | 2026-01-13 00:49:57 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:00.901768 | orchestrator | 2026-01-13 00:50:00 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:00.902087 | orchestrator | 2026-01-13 00:50:00 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:00.902144 | orchestrator | 2026-01-13 00:50:00 | INFO  | Wait 1 second(s) 
until the next check 2026-01-13 00:50:03.957253 | orchestrator | 2026-01-13 00:50:03 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:03.960715 | orchestrator | 2026-01-13 00:50:03 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:03.960797 | orchestrator | 2026-01-13 00:50:03 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:07.003539 | orchestrator | 2026-01-13 00:50:07 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:07.006061 | orchestrator | 2026-01-13 00:50:07 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:07.006117 | orchestrator | 2026-01-13 00:50:07 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:10.051368 | orchestrator | 2026-01-13 00:50:10 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:10.054521 | orchestrator | 2026-01-13 00:50:10 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:10.054645 | orchestrator | 2026-01-13 00:50:10 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:13.099946 | orchestrator | 2026-01-13 00:50:13 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:13.100981 | orchestrator | 2026-01-13 00:50:13 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:13.101235 | orchestrator | 2026-01-13 00:50:13 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:16.149508 | orchestrator | 2026-01-13 00:50:16 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:16.151693 | orchestrator | 2026-01-13 00:50:16 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:16.151753 | orchestrator | 2026-01-13 00:50:16 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:19.182691 | orchestrator | 2026-01-13 
00:50:19 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:19.184601 | orchestrator | 2026-01-13 00:50:19 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:19.184650 | orchestrator | 2026-01-13 00:50:19 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:22.232311 | orchestrator | 2026-01-13 00:50:22 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:22.232357 | orchestrator | 2026-01-13 00:50:22 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:22.232363 | orchestrator | 2026-01-13 00:50:22 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:25.281184 | orchestrator | 2026-01-13 00:50:25 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:25.283093 | orchestrator | 2026-01-13 00:50:25 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:25.283144 | orchestrator | 2026-01-13 00:50:25 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:28.324832 | orchestrator | 2026-01-13 00:50:28 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:28.327449 | orchestrator | 2026-01-13 00:50:28 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:28.327960 | orchestrator | 2026-01-13 00:50:28 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:31.369632 | orchestrator | 2026-01-13 00:50:31 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:31.371367 | orchestrator | 2026-01-13 00:50:31 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:31.371426 | orchestrator | 2026-01-13 00:50:31 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:34.407738 | orchestrator | 2026-01-13 00:50:34 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state 
STARTED 2026-01-13 00:50:34.407926 | orchestrator | 2026-01-13 00:50:34 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:34.407941 | orchestrator | 2026-01-13 00:50:34 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:37.448237 | orchestrator | 2026-01-13 00:50:37 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:37.448901 | orchestrator | 2026-01-13 00:50:37 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:37.448966 | orchestrator | 2026-01-13 00:50:37 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:40.496524 | orchestrator | 2026-01-13 00:50:40 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:40.501392 | orchestrator | 2026-01-13 00:50:40 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:40.502449 | orchestrator | 2026-01-13 00:50:40 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:43.540601 | orchestrator | 2026-01-13 00:50:43 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:43.541465 | orchestrator | 2026-01-13 00:50:43 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:43.541528 | orchestrator | 2026-01-13 00:50:43 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:46.609653 | orchestrator | 2026-01-13 00:50:46 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:46.611497 | orchestrator | 2026-01-13 00:50:46 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:46.611583 | orchestrator | 2026-01-13 00:50:46 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:49.659812 | orchestrator | 2026-01-13 00:50:49 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:49.662957 | orchestrator | 2026-01-13 00:50:49 | INFO  
| Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:49.663019 | orchestrator | 2026-01-13 00:50:49 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:52.700443 | orchestrator | 2026-01-13 00:50:52 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:52.700807 | orchestrator | 2026-01-13 00:50:52 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:52.701055 | orchestrator | 2026-01-13 00:50:52 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:55.746164 | orchestrator | 2026-01-13 00:50:55 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:55.746943 | orchestrator | 2026-01-13 00:50:55 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:55.747364 | orchestrator | 2026-01-13 00:50:55 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:50:58.795611 | orchestrator | 2026-01-13 00:50:58 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:50:58.797179 | orchestrator | 2026-01-13 00:50:58 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:50:58.797285 | orchestrator | 2026-01-13 00:50:58 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:51:01.847270 | orchestrator | 2026-01-13 00:51:01 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:51:01.852242 | orchestrator | 2026-01-13 00:51:01 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:51:01.852300 | orchestrator | 2026-01-13 00:51:01 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:51:04.895585 | orchestrator | 2026-01-13 00:51:04 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED 2026-01-13 00:51:04.896796 | orchestrator | 2026-01-13 00:51:04 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 
00:51:04.896835 | orchestrator | 2026-01-13 00:51:04 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:51:07.949307 | orchestrator | 2026-01-13 00:51:07 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state STARTED
2026-01-13 00:51:07.949411 | orchestrator | 2026-01-13 00:51:07 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:51:07.949422 | orchestrator | 2026-01-13 00:51:07 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:52:27.356696 | orchestrator | 2026-01-13 00:52:27 | INFO  | Task b89ee725-defa-4e6f-a85b-fcd8cc331623 is in state SUCCESS
2026-01-13 00:52:27.356975 | orchestrator |
2026-01-13 00:52:27.358819 | orchestrator |
2026-01-13 00:52:27.358890 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-13 00:52:27.358902 | orchestrator |
2026-01-13 00:52:27.358909 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-13 00:52:27.358917 | orchestrator | Tuesday 13 January 2026 00:46:08 +0000 (0:00:00.246) 0:00:00.246 *******
2026-01-13 00:52:27.358924 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.358931 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.359045 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.359056 | orchestrator |
2026-01-13 00:52:27.359063 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-13 00:52:27.359070 | orchestrator | Tuesday 13 January 2026 00:46:08 +0000 (0:00:00.276) 0:00:00.522 *******
2026-01-13 00:52:27.359122 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-01-13 00:52:27.359129 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-13 00:52:27.359136 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-13 00:52:27.359143 | orchestrator |
2026-01-13 00:52:27.359150 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-13 00:52:27.359157 | orchestrator |
2026-01-13 00:52:27.359163 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-13 00:52:27.359169 | orchestrator | Tuesday 13 January 2026 00:46:09 +0000 (0:00:00.376) 0:00:00.898 *******
2026-01-13 00:52:27.359177 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:52:27.359183 | orchestrator |
2026-01-13 00:52:27.359189 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-13 00:52:27.359196 | orchestrator | Tuesday 13 January 2026 00:46:09 +0000 (0:00:00.528) 0:00:01.427 *******
2026-01-13 00:52:27.359202 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.359208 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.359215 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.359221 | orchestrator |
2026-01-13 00:52:27.359227 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-13 00:52:27.359234 | orchestrator | Tuesday 13 January 2026 00:46:10 +0000 (0:00:00.639) 0:00:02.066 *******
2026-01-13 00:52:27.359240 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:52:27.359247 | orchestrator |
2026-01-13 00:52:27.359253 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-13 00:52:27.359281 | orchestrator | Tuesday 13 January 2026 00:46:11 +0000 (0:00:00.886) 0:00:02.952 *******
2026-01-13 00:52:27.359287 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.359293 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.359299 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.359306 | orchestrator |
2026-01-13 00:52:27.359312 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-13 00:52:27.359331 | orchestrator | Tuesday 13 January 2026 00:46:11 +0000 (0:00:00.621) 0:00:03.574 *******
2026-01-13 00:52:27.359338 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-13 00:52:27.359345 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-13 00:52:27.359351 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-13 00:52:27.359358 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-13 00:52:27.359364 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-13 00:52:27.359369 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-13 00:52:27.359377 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-13 00:52:27.359382 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-13 00:52:27.359388 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-13 00:52:27.359394 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-13 00:52:27.359719 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-13 00:52:27.359748 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-13 00:52:27.359755 | orchestrator |
2026-01-13 00:52:27.359761 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-13 00:52:27.359768 | orchestrator | Tuesday 13 January 2026 00:46:15 +0000 (0:00:03.190) 0:00:06.765 *******
2026-01-13 00:52:27.359775 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-13 00:52:27.359782 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-13 00:52:27.359791 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-13 00:52:27.359798 | orchestrator |
2026-01-13 00:52:27.359805 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-13 00:52:27.359811 | orchestrator | Tuesday 13 January 2026 00:46:15 +0000 (0:00:00.707) 0:00:07.472 *******
2026-01-13 00:52:27.359818 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-13 00:52:27.359825 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-13 00:52:27.359832 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-13 00:52:27.359838 | orchestrator |
2026-01-13 00:52:27.359845 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-13 00:52:27.359851 | orchestrator | Tuesday 13 January 2026 00:46:17 +0000 (0:00:01.591) 0:00:09.063 *******
2026-01-13 00:52:27.359858 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-01-13 00:52:27.359864 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.359885 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-01-13 00:52:27.359892 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.359899 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-01-13 00:52:27.359905 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.359911 | orchestrator |
2026-01-13 00:52:27.359918 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-01-13 00:52:27.359924 | orchestrator | Tuesday 13 January 2026 00:46:18 +0000 (0:00:00.923) 0:00:09.987 *******
2026-01-13 00:52:27.359935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.360066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.360258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.360268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.360275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.360302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.360396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.360418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.360426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.360433 | orchestrator |
2026-01-13 00:52:27.360440 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-01-13 00:52:27.360447 | orchestrator | Tuesday 13 January 2026 00:46:20 +0000 (0:00:02.258) 0:00:12.246 *******
2026-01-13 00:52:27.360453 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.360460 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.360466 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.360472 | orchestrator |
2026-01-13 00:52:27.360478 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-01-13 00:52:27.360490 | orchestrator | Tuesday 13 January 2026 00:46:21 +0000 (0:00:01.046) 0:00:13.292 *******
2026-01-13 00:52:27.360497 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-01-13 00:52:27.360503 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-01-13 00:52:27.360533 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-01-13 00:52:27.360539 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-01-13 00:52:27.360545 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-01-13 00:52:27.360550 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-01-13 00:52:27.360556 | orchestrator |
2026-01-13 00:52:27.360562 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-01-13 00:52:27.360569 | orchestrator | Tuesday 13 January 2026 00:46:23 +0000 (0:00:02.216) 0:00:15.509 *******
2026-01-13 00:52:27.360576 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.360582 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.360589 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.360595 | orchestrator |
2026-01-13 00:52:27.360601 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-01-13 00:52:27.360608 | orchestrator | Tuesday 13 January 2026 00:46:25 +0000 (0:00:01.835) 0:00:16.889 *******
2026-01-13 00:52:27.360658 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.360666 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.360673 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.360679 | orchestrator |
2026-01-13 00:52:27.360686 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-01-13 00:52:27.360692 | orchestrator | Tuesday 13 January 2026 00:46:27 +0000 (0:00:01.835) 0:00:18.724 *******
2026-01-13 00:52:27.360699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.360993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.361151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.361168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__407d0674c1af89b0aa1d143b0a55ff06f98c36b0', '__omit_place_holder__407d0674c1af89b0aa1d143b0a55ff06f98c36b0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-13 00:52:27.361176 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.361188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.361196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.361203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.361220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__407d0674c1af89b0aa1d143b0a55ff06f98c36b0', '__omit_place_holder__407d0674c1af89b0aa1d143b0a55ff06f98c36b0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-13 00:52:27.361227 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.361257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.361264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.361271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.361281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__407d0674c1af89b0aa1d143b0a55ff06f98c36b0', '__omit_place_holder__407d0674c1af89b0aa1d143b0a55ff06f98c36b0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-13 00:52:27.361288 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.361294 | orchestrator |
2026-01-13 00:52:27.361302 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-01-13 00:52:27.361308 | orchestrator | Tuesday 13 January 2026 00:46:29 +0000 (0:00:02.425) 0:00:21.150 *******
2026-01-13 00:52:27.361315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.361330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.361443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.361457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.361463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.361764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__407d0674c1af89b0aa1d143b0a55ff06f98c36b0', '__omit_place_holder__407d0674c1af89b0aa1d143b0a55ff06f98c36b0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-13 00:52:27.361775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.361792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.361799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__407d0674c1af89b0aa1d143b0a55ff06f98c36b0', '__omit_place_holder__407d0674c1af89b0aa1d143b0a55ff06f98c36b0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-13
00:52:27.361825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-13 00:52:27.361833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-13 00:52:27.361839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__407d0674c1af89b0aa1d143b0a55ff06f98c36b0', '__omit_place_holder__407d0674c1af89b0aa1d143b0a55ff06f98c36b0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-13 00:52:27.361846 | orchestrator | 2026-01-13 00:52:27.361854 | 
orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-13 00:52:27.361861 | orchestrator | Tuesday 13 January 2026 00:46:32 +0000 (0:00:02.965) 0:00:24.115 ******* 2026-01-13 00:52:27.361907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-13 00:52:27.361922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-13 00:52:27.361928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-13 00:52:27.361952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-13 00:52:27.361960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-13 00:52:27.361966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-13 00:52:27.361977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-13 00:52:27.361984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-13 00:52:27.361995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-13 00:52:27.362001 | orchestrator | 2026-01-13 00:52:27.362007 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] 
********************************* 2026-01-13 00:52:27.362110 | orchestrator | Tuesday 13 January 2026 00:46:35 +0000 (0:00:03.274) 0:00:27.390 ******* 2026-01-13 00:52:27.362119 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-13 00:52:27.362126 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-13 00:52:27.362132 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-13 00:52:27.362138 | orchestrator | 2026-01-13 00:52:27.362144 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-13 00:52:27.362150 | orchestrator | Tuesday 13 January 2026 00:46:38 +0000 (0:00:02.272) 0:00:29.663 ******* 2026-01-13 00:52:27.362156 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-13 00:52:27.362561 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-13 00:52:27.362592 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-13 00:52:27.362598 | orchestrator | 2026-01-13 00:52:27.362631 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-13 00:52:27.362638 | orchestrator | Tuesday 13 January 2026 00:46:42 +0000 (0:00:04.447) 0:00:34.111 ******* 2026-01-13 00:52:27.362645 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.362651 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.362657 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.362663 | orchestrator | 2026-01-13 00:52:27.362669 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-13 
00:52:27.362676 | orchestrator | Tuesday 13 January 2026 00:46:43 +0000 (0:00:00.516) 0:00:34.628 ******* 2026-01-13 00:52:27.362682 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-13 00:52:27.362690 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-13 00:52:27.362697 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-13 00:52:27.362707 | orchestrator | 2026-01-13 00:52:27.362715 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-13 00:52:27.362721 | orchestrator | Tuesday 13 January 2026 00:46:45 +0000 (0:00:02.745) 0:00:37.373 ******* 2026-01-13 00:52:27.362728 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-13 00:52:27.362735 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-13 00:52:27.362742 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-13 00:52:27.362758 | orchestrator | 2026-01-13 00:52:27.362764 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-13 00:52:27.362771 | orchestrator | Tuesday 13 January 2026 00:46:48 +0000 (0:00:02.891) 0:00:40.264 ******* 2026-01-13 00:52:27.362779 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-13 00:52:27.362787 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-13 00:52:27.362800 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-13 00:52:27.362806 | orchestrator | 2026-01-13 00:52:27.362811 | orchestrator | TASK [loadbalancer : 
Copying over haproxy-internal.pem] ************************ 2026-01-13 00:52:27.362817 | orchestrator | Tuesday 13 January 2026 00:46:50 +0000 (0:00:01.654) 0:00:41.919 ******* 2026-01-13 00:52:27.362823 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-13 00:52:27.362834 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-13 00:52:27.362839 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-13 00:52:27.362845 | orchestrator | 2026-01-13 00:52:27.362851 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-13 00:52:27.362856 | orchestrator | Tuesday 13 January 2026 00:46:52 +0000 (0:00:01.865) 0:00:43.785 ******* 2026-01-13 00:52:27.362921 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.363434 | orchestrator | 2026-01-13 00:52:27.363454 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-13 00:52:27.363460 | orchestrator | Tuesday 13 January 2026 00:46:53 +0000 (0:00:01.120) 0:00:44.905 ******* 2026-01-13 00:52:27.363469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-13 00:52:27.363478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-13 00:52:27.363545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-13 00:52:27.363556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-13 00:52:27.363573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-13 00:52:27.363586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-13 00:52:27.363592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-13 00:52:27.363599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-13 00:52:27.363605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-13 00:52:27.363612 | orchestrator | 2026-01-13 00:52:27.363618 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-13 00:52:27.363624 | orchestrator | Tuesday 13 January 2026 00:46:57 +0000 (0:00:03.844) 0:00:48.749 ******* 2026-01-13 00:52:27.363907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-13 00:52:27.363970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-13 00:52:27.363979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-13 00:52:27.363987 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.364000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-13 00:52:27.364008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-13 00:52:27.364149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-13 00:52:27.364159 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.364167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-13 00:52:27.364224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-13 00:52:27.364243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-13 00:52:27.364251 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.364258 | orchestrator | 2026-01-13 00:52:27.364264 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-13 00:52:27.364271 | orchestrator | Tuesday 13 January 2026 00:46:57 +0000 (0:00:00.678) 0:00:49.428 ******* 2026-01-13 00:52:27.364283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-13 00:52:27.364518 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.364543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.364550 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.364557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.364660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.364681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.364688 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.364695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.364702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.364710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.364717 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.364764 | orchestrator |
2026-01-13 00:52:27.364773 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-01-13 00:52:27.364780 | orchestrator | Tuesday 13 January 2026 00:46:58 +0000 (0:00:00.904) 0:00:50.332 *******
2026-01-13 00:52:27.364815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.364875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.364893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.364900 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.364906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.364912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.364946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.364953 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.364959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.364965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.365649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.365674 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.365682 | orchestrator |
2026-01-13 00:52:27.365689 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-01-13 00:52:27.365697 | orchestrator | Tuesday 13 January 2026 00:47:00 +0000 (0:00:01.622) 0:00:51.954 *******
2026-01-13 00:52:27.365703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.365711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.365722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.365729 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.365736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.365743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.365757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.365763 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.365819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.365828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.365835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.365841 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.365847 | orchestrator |
2026-01-13 00:52:27.365854 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-01-13 00:52:27.365860 | orchestrator | Tuesday 13 January 2026 00:47:00 +0000 (0:00:00.547) 0:00:52.502 *******
2026-01-13 00:52:27.365870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.365877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.365892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.365899 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.365982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.365993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.366000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.366007 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.366178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.366187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.366426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.366446 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.366454 | orchestrator |
2026-01-13 00:52:27.366461 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-01-13 00:52:27.366467 | orchestrator | Tuesday 13 January 2026 00:47:01 +0000 (0:00:00.874) 0:00:53.376 *******
2026-01-13 00:52:27.366474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.366547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.366558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.366564 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.366571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.366585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.366600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.366607 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.366614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.366669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.366678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.366685 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.366692 | orchestrator |
2026-01-13 00:52:27.366752 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-01-13 00:52:27.366759 | orchestrator | Tuesday 13 January 2026 00:47:03 +0000 (0:00:01.217) 0:00:54.594 *******
2026-01-13 00:52:27.367072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.367093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.367109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.367116 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.367122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.367129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.367211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.367221 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.367227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.367233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.367314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.367325 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.367332 | orchestrator |
2026-01-13 00:52:27.367338 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-01-13 00:52:27.367345 | orchestrator | Tuesday 13 January 2026 00:47:04 +0000 (0:00:01.146) 0:00:55.740 *******
2026-01-13 00:52:27.367352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.367359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.367367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.367373 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.367437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.367449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.367463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-13 00:52:27.367469 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.367493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-13 00:52:27.367499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-13 00:52:27.367506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-13 00:52:27.367511 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.367516 | orchestrator | 2026-01-13 00:52:27.367522 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-13 00:52:27.367528 | orchestrator | Tuesday 13 January 2026 00:47:04 +0000 (0:00:00.792) 0:00:56.533 ******* 2026-01-13 00:52:27.367534 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-13 00:52:27.367541 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-13 00:52:27.368823 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-13 00:52:27.368863 | orchestrator | 2026-01-13 00:52:27.368871 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-13 00:52:27.368877 | orchestrator | Tuesday 13 January 2026 00:47:06 +0000 (0:00:01.731) 0:00:58.265 ******* 2026-01-13 00:52:27.368884 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-13 00:52:27.368891 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-13 00:52:27.368899 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-13 00:52:27.368905 | orchestrator | 2026-01-13 00:52:27.368911 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-13 00:52:27.368917 | orchestrator | Tuesday 13 January 2026 00:47:08 +0000 (0:00:01.417) 0:00:59.682 ******* 2026-01-13 00:52:27.368933 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-13 00:52:27.368940 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-13 00:52:27.368946 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-13 00:52:27.368953 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-13 00:52:27.368960 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.368967 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-13 00:52:27.368972 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.368978 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-13 00:52:27.368983 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.368988 | orchestrator | 2026-01-13 00:52:27.368994 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-01-13 00:52:27.369000 | orchestrator | Tuesday 13 January 2026 00:47:08 +0000 (0:00:00.809) 0:01:00.491 ******* 2026-01-13 00:52:27.369115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-13 00:52:27.369126 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-13 00:52:27.369133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-13 00:52:27.369953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-13 00:52:27.370900 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-13 00:52:27.370952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-13 00:52:27.370961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-13 00:52:27.370973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-13 00:52:27.370980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-13 00:52:27.370987 | orchestrator | 2026-01-13 00:52:27.370994 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-13 00:52:27.371002 | orchestrator | Tuesday 13 January 2026 00:47:11 +0000 (0:00:02.483) 0:01:02.975 ******* 2026-01-13 00:52:27.371056 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.371065 | orchestrator | 2026-01-13 00:52:27.371071 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-13 00:52:27.371078 | orchestrator | Tuesday 13 January 2026 00:47:11 +0000 (0:00:00.523) 0:01:03.499 ******* 2026-01-13 00:52:27.371090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-13 00:52:27.371124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-13 00:52:27.371132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-13 00:52:27.371162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-13 00:52:27.371168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-13 00:52:27.371187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-13 00:52:27.371201 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371226 | orchestrator | 2026-01-13 00:52:27.371232 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-13 00:52:27.371239 | orchestrator | Tuesday 13 January 2026 00:47:15 
+0000 (0:00:04.074) 0:01:07.573 ******* 2026-01-13 00:52:27.371247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-13 00:52:27.371264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-13 00:52:27.371272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-13 00:52:27.371290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371297 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.371304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-13 00:52:27.371311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371330 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.371342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-13 00:52:27.371350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-13 00:52:27.371361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371375 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.371382 | orchestrator | 2026-01-13 00:52:27.371387 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-13 00:52:27.371394 | orchestrator | Tuesday 13 January 2026 00:47:16 +0000 (0:00:00.884) 0:01:08.458 ******* 2026-01-13 00:52:27.371400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-13 00:52:27.371412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-13 00:52:27.371420 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.371427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-13 00:52:27.371433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-13 00:52:27.371441 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.371448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-13 00:52:27.371455 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-13 00:52:27.371461 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.371468 | orchestrator | 2026-01-13 00:52:27.371479 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-13 00:52:27.371486 | orchestrator | Tuesday 13 January 2026 00:47:17 +0000 (0:00:00.954) 0:01:09.413 ******* 2026-01-13 00:52:27.371493 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.371500 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.371507 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.371513 | orchestrator | 2026-01-13 00:52:27.371520 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-13 00:52:27.371528 | orchestrator | Tuesday 13 January 2026 00:47:18 +0000 (0:00:01.121) 0:01:10.534 ******* 2026-01-13 00:52:27.371535 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.371543 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.371551 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.371558 | orchestrator | 2026-01-13 00:52:27.371566 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-13 00:52:27.371574 | orchestrator | Tuesday 13 January 2026 00:47:20 +0000 (0:00:01.960) 0:01:12.495 ******* 2026-01-13 00:52:27.371582 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.371591 | orchestrator | 2026-01-13 00:52:27.371599 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-13 00:52:27.371607 | orchestrator | Tuesday 13 January 2026 00:47:21 +0000 (0:00:00.762) 0:01:13.257 ******* 2026-01-13 
00:52:27.371615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 00:52:27.371627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 00:52:27.371660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371667 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 00:52:27.371705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371716 | orchestrator | 2026-01-13 00:52:27.371722 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-13 00:52:27.371728 | orchestrator | Tuesday 13 January 2026 00:47:26 +0000 (0:00:04.488) 0:01:17.745 ******* 2026-01-13 00:52:27.371739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-13 00:52:27.371745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371760 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.371771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-13 00:52:27.371783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371797 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.371809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-13 00:52:27.371817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.371837 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.371844 | orchestrator | 2026-01-13 00:52:27.371857 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-13 00:52:27.371864 | orchestrator | Tuesday 13 January 2026 00:47:26 +0000 (0:00:00.730) 0:01:18.476 ******* 2026-01-13 00:52:27.371871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-13 00:52:27.371880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-13 00:52:27.371887 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.371894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-13 00:52:27.371902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-13 00:52:27.371909 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.371916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-13 00:52:27.371923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-13 00:52:27.371930 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.371937 | orchestrator | 2026-01-13 00:52:27.371944 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-13 00:52:27.371951 | orchestrator | Tuesday 13 January 2026 00:47:27 +0000 (0:00:01.082) 0:01:19.559 ******* 2026-01-13 00:52:27.371959 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.371968 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.371976 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.371983 | orchestrator | 2026-01-13 00:52:27.371990 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-13 00:52:27.371997 | orchestrator | Tuesday 13 January 2026 00:47:29 +0000 (0:00:01.439) 0:01:20.999 ******* 2026-01-13 00:52:27.372004 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.372035 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.372042 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.372048 | orchestrator | 2026-01-13 00:52:27.372059 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-13 00:52:27.372066 | orchestrator | Tuesday 13 January 2026 00:47:32 +0000 (0:00:02.584) 0:01:23.583 ******* 2026-01-13 00:52:27.372072 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.372079 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.372086 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.372093 | orchestrator | 2026-01-13 00:52:27.372099 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-13 00:52:27.372106 | orchestrator | Tuesday 13 January 2026 00:47:32 +0000 (0:00:00.322) 0:01:23.906 ******* 2026-01-13 00:52:27.372113 | 
orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.372126 | orchestrator | 2026-01-13 00:52:27.372132 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-13 00:52:27.372139 | orchestrator | Tuesday 13 January 2026 00:47:33 +0000 (0:00:00.849) 0:01:24.755 ******* 2026-01-13 00:52:27.372146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-13 00:52:27.372155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-13 00:52:27.372162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-13 00:52:27.372170 | orchestrator | 2026-01-13 00:52:27.372176 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-13 00:52:27.372183 | orchestrator | Tuesday 13 January 2026 00:47:35 +0000 (0:00:02.571) 0:01:27.327 ******* 2026-01-13 00:52:27.372232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-13 00:52:27.372242 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.372255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-13 00:52:27.372262 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.372268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check 
inter 2000 rise 2 fall 5']}}}})  2026-01-13 00:52:27.372275 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.372282 | orchestrator | 2026-01-13 00:52:27.372291 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-13 00:52:27.372298 | orchestrator | Tuesday 13 January 2026 00:47:37 +0000 (0:00:01.563) 0:01:28.890 ******* 2026-01-13 00:52:27.372306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-13 00:52:27.372316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-13 00:52:27.372324 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.372331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-13 00:52:27.372338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-13 00:52:27.372345 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.372356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-13 00:52:27.372368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-13 00:52:27.372375 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.372382 | orchestrator | 2026-01-13 00:52:27.372388 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-13 00:52:27.372394 | orchestrator | Tuesday 13 January 2026 00:47:39 +0000 (0:00:02.151) 0:01:31.042 ******* 2026-01-13 00:52:27.372399 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.372405 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.372411 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.372418 | orchestrator | 2026-01-13 00:52:27.372425 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] 
*********** 2026-01-13 00:52:27.372432 | orchestrator | Tuesday 13 January 2026 00:47:40 +0000 (0:00:00.876) 0:01:31.919 ******* 2026-01-13 00:52:27.372438 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.372445 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.372451 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.372458 | orchestrator | 2026-01-13 00:52:27.372466 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-13 00:52:27.372473 | orchestrator | Tuesday 13 January 2026 00:47:41 +0000 (0:00:01.256) 0:01:33.175 ******* 2026-01-13 00:52:27.372480 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.372487 | orchestrator | 2026-01-13 00:52:27.372493 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-13 00:52:27.372500 | orchestrator | Tuesday 13 January 2026 00:47:42 +0000 (0:00:00.762) 0:01:33.937 ******* 2026-01-13 00:52:27.372512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-13 00:52:27.372520 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-13 00:52:27.372561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2026-01-13 00:52:27.372603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372627 | orchestrator | 2026-01-13 00:52:27.372634 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-13 00:52:27.372641 | orchestrator | Tuesday 13 January 2026 00:47:46 +0000 (0:00:04.160) 0:01:38.098 ******* 2026-01-13 00:52:27.372648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-13 00:52:27.372661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372672 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372688 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.372695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-13 00:52:27.372706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372732 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.372744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-13 00:52:27.372752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.372784 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.372792 | orchestrator | 2026-01-13 00:52:27.372799 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-13 00:52:27.372806 | orchestrator | Tuesday 13 January 2026 00:47:47 +0000 (0:00:01.041) 
0:01:39.139 ******* 2026-01-13 00:52:27.372813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-13 00:52:27.372820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-13 00:52:27.372827 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.372834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-13 00:52:27.372842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-13 00:52:27.372849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-13 00:52:27.372856 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.372866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-13 00:52:27.372874 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.372881 | orchestrator | 2026-01-13 00:52:27.372887 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-13 00:52:27.372892 | orchestrator | Tuesday 13 January 2026 00:47:48 
+0000 (0:00:01.073) 0:01:40.212 ******* 2026-01-13 00:52:27.372898 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.372904 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.372909 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.372915 | orchestrator | 2026-01-13 00:52:27.372921 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-13 00:52:27.372926 | orchestrator | Tuesday 13 January 2026 00:47:49 +0000 (0:00:01.203) 0:01:41.416 ******* 2026-01-13 00:52:27.372932 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.372938 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.372944 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.372950 | orchestrator | 2026-01-13 00:52:27.372956 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-13 00:52:27.372962 | orchestrator | Tuesday 13 January 2026 00:47:52 +0000 (0:00:02.680) 0:01:44.097 ******* 2026-01-13 00:52:27.372967 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.372973 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.372978 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.372984 | orchestrator | 2026-01-13 00:52:27.372989 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-13 00:52:27.372995 | orchestrator | Tuesday 13 January 2026 00:47:53 +0000 (0:00:00.704) 0:01:44.801 ******* 2026-01-13 00:52:27.373001 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.373006 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.373040 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.373046 | orchestrator | 2026-01-13 00:52:27.373051 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-13 00:52:27.373057 | orchestrator | Tuesday 13 January 2026 00:47:53 +0000 
(0:00:00.411) 0:01:45.213 ******* 2026-01-13 00:52:27.373062 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.373068 | orchestrator | 2026-01-13 00:52:27.373074 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-13 00:52:27.373079 | orchestrator | Tuesday 13 January 2026 00:47:54 +0000 (0:00:00.853) 0:01:46.067 ******* 2026-01-13 00:52:27.373090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 00:52:27.373097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}})  2026-01-13 00:52:27.373103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.373306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.373323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.373337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.373347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.373354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 00:52:27.373361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-13 00:52:27.373397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.373405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-01-13 00:52:27.373411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.373423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.373434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.373441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 00:52:27.373448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-13 00:52:27.373501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373547 | orchestrator |
2026-01-13 00:52:27.373554 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-01-13 00:52:27.373560 | orchestrator | Tuesday 13 January 2026 00:48:00 +0000 (0:00:06.153) 0:01:52.220 *******
2026-01-13 00:52:27.373567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-13 00:52:27.373610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-13 00:52:27.373625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373662 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.373718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-13 00:52:27.373734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-13 00:52:27.373741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-13 00:52:27.373752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-13 00:52:27.373765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.373985 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.373993 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.373999 | orchestrator |
2026-01-13 00:52:27.374006 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-01-13 00:52:27.374108 | orchestrator | Tuesday 13 January 2026 00:48:01 +0000 (0:00:01.179) 0:01:53.400 *******
2026-01-13 00:52:27.374118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-13 00:52:27.374127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-13 00:52:27.374136 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.374143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-13 00:52:27.374150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-13 00:52:27.374158 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.374165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-13 00:52:27.374171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-13 00:52:27.374178 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.374185 | orchestrator |
2026-01-13 00:52:27.374192 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-01-13 00:52:27.374199 | orchestrator | Tuesday 13 January 2026 00:48:03 +0000 (0:00:01.414) 0:01:54.815 *******
2026-01-13 00:52:27.374206 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.374214 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.374226 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.374241 | orchestrator |
2026-01-13 00:52:27.374249 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-01-13 00:52:27.374256 | orchestrator | Tuesday 13 January 2026 00:48:05 +0000 (0:00:01.809) 0:01:56.624 *******
2026-01-13 00:52:27.374263 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.374270 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.374277 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.374284 | orchestrator |
2026-01-13 00:52:27.374292 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-01-13 00:52:27.374299 | orchestrator | Tuesday 13 January 2026 00:48:06 +0000 (0:00:01.834) 0:01:58.459 *******
2026-01-13 00:52:27.374306 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.374313 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.374320 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.374327 | orchestrator |
2026-01-13 00:52:27.374334 | orchestrator | TASK [include_role : glance] ***************************************************
2026-01-13 00:52:27.374342 | orchestrator | Tuesday 13 January 2026 00:48:07 +0000 (0:00:00.479) 0:01:58.938 *******
2026-01-13 00:52:27.374349 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:52:27.374356 | orchestrator |
2026-01-13 00:52:27.374363 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-01-13 00:52:27.374377 | orchestrator | Tuesday 13 January 2026 00:48:07 +0000 (0:00:00.614) 0:01:59.553 *******
2026-01-13 00:52:27.374486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-13 00:52:27.374506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-13 00:52:27.374515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-13 00:52:27.374578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-13 00:52:27.374594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-13 00:52:27.374664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-13 00:52:27.374675 | orchestrator |
2026-01-13 00:52:27.374682 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-01-13 00:52:27.374688 | orchestrator | Tuesday 13 January 2026 00:48:12 +0000 (0:00:04.617) 0:02:04.170 *******
2026-01-13 00:52:27.374700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-13 00:52:27.374757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-13 00:52:27.374767 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.374778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-13 00:52:27.374820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-13 00:52:27.374828 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.374839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-13 00:52:27.374872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy',
'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-13 00:52:27.374885 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.374890 | orchestrator | 2026-01-13 00:52:27.374896 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] 
************************ 2026-01-13 00:52:27.374918 | orchestrator | Tuesday 13 January 2026 00:48:16 +0000 (0:00:03.548) 0:02:07.719 ******* 2026-01-13 00:52:27.374926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-13 00:52:27.374933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-13 00:52:27.374939 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.374946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-13 00:52:27.374957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-13 00:52:27.374968 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.374974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-13 00:52:27.374981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-13 00:52:27.374986 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.374992 | orchestrator | 2026-01-13 00:52:27.374998 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-13 00:52:27.375003 | orchestrator | Tuesday 13 January 2026 00:48:19 +0000 (0:00:03.101) 0:02:10.820 ******* 2026-01-13 00:52:27.375077 | orchestrator | 
changed: [testbed-node-0]
2026-01-13 00:52:27.375087 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.375094 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.375100 | orchestrator |
2026-01-13 00:52:27.375107 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-01-13 00:52:27.375114 | orchestrator | Tuesday 13 January 2026 00:48:20 +0000 (0:00:01.342) 0:02:12.163 *******
2026-01-13 00:52:27.375122 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.375129 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.375136 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.375143 | orchestrator |
2026-01-13 00:52:27.375150 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-01-13 00:52:27.375243 | orchestrator | Tuesday 13 January 2026 00:48:22 +0000 (0:00:01.898) 0:02:14.061 *******
2026-01-13 00:52:27.375253 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.375259 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.375265 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.375270 | orchestrator |
2026-01-13 00:52:27.375276 | orchestrator | TASK [include_role : grafana] **************************************************
2026-01-13 00:52:27.375281 | orchestrator | Tuesday 13 January 2026 00:48:22 +0000 (0:00:00.431) 0:02:14.493 *******
2026-01-13 00:52:27.375286 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:52:27.375293 | orchestrator |
2026-01-13 00:52:27.375299 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-01-13 00:52:27.375305 | orchestrator | Tuesday 13 January 2026 00:48:23 +0000 (0:00:00.793) 0:02:15.287 *******
2026-01-13 00:52:27.375312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group':
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 00:52:27.375342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 00:52:27.375349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 00:52:27.375355 | orchestrator 
| 2026-01-13 00:52:27.375361 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-13 00:52:27.375367 | orchestrator | Tuesday 13 January 2026 00:48:27 +0000 (0:00:03.351) 0:02:18.639 ******* 2026-01-13 00:52:27.375373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-13 00:52:27.375431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-13 00:52:27.375440 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.375446 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.375451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-13 00:52:27.375457 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.375470 | orchestrator | 2026-01-13 00:52:27.375475 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-13 00:52:27.375481 | orchestrator | Tuesday 13 January 2026 00:48:27 +0000 (0:00:00.627) 0:02:19.266 ******* 2026-01-13 00:52:27.375488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-13 00:52:27.375496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-13 00:52:27.375503 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.375509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-13 00:52:27.375515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-13 00:52:27.375521 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.375555 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-13 00:52:27.375562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-13 00:52:27.375569 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.375575 | orchestrator |
2026-01-13 00:52:27.375580 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-01-13 00:52:27.375587 | orchestrator | Tuesday 13 January 2026 00:48:28 +0000 (0:00:00.644) 0:02:19.911 *******
2026-01-13 00:52:27.375592 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.375598 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.375604 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.375610 | orchestrator |
2026-01-13 00:52:27.375616 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-01-13 00:52:27.375622 | orchestrator | Tuesday 13 January 2026 00:48:29 +0000 (0:00:01.361) 0:02:21.272 *******
2026-01-13 00:52:27.375628 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.375634 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.375640 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.375646 | orchestrator |
2026-01-13 00:52:27.375653 | orchestrator | TASK [include_role : heat] *****************************************************
2026-01-13 00:52:27.375660 | orchestrator | Tuesday 13 January 2026 00:48:31 +0000 (0:00:02.179) 0:02:23.452 *******
2026-01-13 00:52:27.375667 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.375673 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.375679 | orchestrator | skipping:
[testbed-node-2] 2026-01-13 00:52:27.375686 | orchestrator | 2026-01-13 00:52:27.375693 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-13 00:52:27.375700 | orchestrator | Tuesday 13 January 2026 00:48:32 +0000 (0:00:00.744) 0:02:24.196 ******* 2026-01-13 00:52:27.375706 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.375712 | orchestrator | 2026-01-13 00:52:27.375719 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-13 00:52:27.375726 | orchestrator | Tuesday 13 January 2026 00:48:33 +0000 (0:00:00.896) 0:02:25.093 ******* 2026-01-13 00:52:27.375816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-13 00:52:27.375843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-13 00:52:27.375890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-13 00:52:27.375898 | orchestrator | 2026-01-13 00:52:27.375905 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-13 00:52:27.375914 | orchestrator | Tuesday 13 January 2026 00:48:37 +0000 (0:00:04.091) 0:02:29.185 ******* 2026-01-13 00:52:27.375946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-13 00:52:27.375959 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.375970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2026-01-13 00:52:27.375976 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.376032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-13 00:52:27.376046 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.376053 | orchestrator |
2026-01-13 00:52:27.376060 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-01-13 00:52:27.376066 | orchestrator | Tuesday 13 January 2026 00:48:38 +0000 (0:00:01.228) 0:02:30.413 *******
2026-01-13 00:52:27.376074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-01-13 00:52:27.376083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-13 00:52:27.376092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-01-13 00:52:27.376100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-01-13 00:52:27.376113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-13 00:52:27.376120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-13 00:52:27.376127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-13 00:52:27.376134 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.376141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-01-13 00:52:27.376148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-13 00:52:27.376185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-01-13 00:52:27.376193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-13 00:52:27.376199 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.376282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-13 00:52:27.376293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-01-13 00:52:27.376300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-13 00:52:27.376306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-13 00:52:27.376312 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.376318 | orchestrator |
2026-01-13 00:52:27.376324 |
orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-01-13 00:52:27.376331 | orchestrator | Tuesday 13 January 2026 00:48:39 +0000 (0:00:01.063) 0:02:31.476 *******
2026-01-13 00:52:27.376337 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.376354 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.376361 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.376367 | orchestrator |
2026-01-13 00:52:27.376373 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-01-13 00:52:27.376379 | orchestrator | Tuesday 13 January 2026 00:48:41 +0000 (0:00:01.194) 0:02:32.671 *******
2026-01-13 00:52:27.376385 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.376390 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.376396 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.376402 | orchestrator |
2026-01-13 00:52:27.376407 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-01-13 00:52:27.376413 | orchestrator | Tuesday 13 January 2026 00:48:43 +0000 (0:00:01.929) 0:02:34.600 *******
2026-01-13 00:52:27.376419 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.376426 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.376432 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.376439 | orchestrator |
2026-01-13 00:52:27.376451 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-01-13 00:52:27.376458 | orchestrator | Tuesday 13 January 2026 00:48:43 +0000 (0:00:00.298) 0:02:34.899 *******
2026-01-13 00:52:27.376465 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.376471 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.376478 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.376490 | orchestrator |
2026-01-13 00:52:27.376497 |
orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-13 00:52:27.376504 | orchestrator | Tuesday 13 January 2026 00:48:43 +0000 (0:00:00.577) 0:02:35.476 ******* 2026-01-13 00:52:27.376510 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.376516 | orchestrator | 2026-01-13 00:52:27.376523 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-13 00:52:27.376529 | orchestrator | Tuesday 13 January 2026 00:48:44 +0000 (0:00:00.921) 0:02:36.398 ******* 2026-01-13 00:52:27.376537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:52:27.376601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:52:27.376611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-13 00:52:27.376619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2026-01-13 00:52:27.376634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:52:27.376669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-13 00:52:27.376676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-13 00:52:27.376736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-13 00:52:27.376746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-13 00:52:27.376753 | orchestrator |
2026-01-13 00:52:27.376759 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-01-13 00:52:27.376767 | orchestrator | Tuesday 13 January 2026 00:48:48 +0000 (0:00:03.384) 0:02:39.783 *******
2026-01-13 00:52:27.376778 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-13 00:52:27.376792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:52:27.376799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-13 00:52:27.376806 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.376857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-13 00:52:27.376867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:52:27.376873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-13 00:52:27.376903 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.376914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-13 00:52:27.376922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-13 00:52:27.376931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-13 00:52:27.376939 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.376947 | orchestrator |
2026-01-13 00:52:27.376954 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-01-13 00:52:27.377032 | orchestrator | Tuesday 13 January 2026 00:48:49 +0000 (0:00:00.908) 0:02:40.691 *******
2026-01-13 00:52:27.377044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-01-13 00:52:27.377053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-01-13 00:52:27.377060 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.377066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-01-13 00:52:27.377072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-01-13 00:52:27.377085 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.377091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-01-13 00:52:27.377098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-01-13 00:52:27.377104 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.377111 | orchestrator |
2026-01-13 00:52:27.377117 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-01-13 00:52:27.377124 | orchestrator | Tuesday 13 January 2026 00:48:49 +0000 (0:00:00.795) 0:02:41.487 *******
2026-01-13 00:52:27.377134 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.377141 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.377147 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.377154 | orchestrator |
2026-01-13 00:52:27.377160 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-01-13 00:52:27.377167 | orchestrator | Tuesday 13 January 2026 00:48:51 +0000 (0:00:02.094) 0:02:42.773 *******
2026-01-13 00:52:27.377173 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.377179 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.377185 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.377191 | orchestrator |
2026-01-13 00:52:27.377197 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-01-13 00:52:27.377203 | orchestrator | Tuesday 13 January 2026 00:48:53 +0000 (0:00:00.628) 0:02:44.868 *******
2026-01-13 00:52:27.377208 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.377214 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.377219 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.377225 | orchestrator |
2026-01-13 00:52:27.377244 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-01-13 00:52:27.377250 | orchestrator | Tuesday 13 January 2026 00:48:53 +0000 (0:00:00.947) 0:02:45.497 *******
2026-01-13 00:52:27.377256 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:52:27.377262 | orchestrator |
2026-01-13 00:52:27.377268 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-01-13 00:52:27.377273 | orchestrator | Tuesday 13 January 2026 00:48:54 +0000 (0:00:00.947) 0:02:46.444 *******
2026-01-13 00:52:27.377280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT':
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-13 00:52:27.377354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.377372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-13 00:52:27.377383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-13 00:52:27.377389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.377395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.377401 | orchestrator | 2026-01-13 00:52:27.377408 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-13 00:52:27.377414 | orchestrator | Tuesday 13 January 2026 00:48:58 +0000 (0:00:03.601) 0:02:50.045 ******* 2026-01-13 00:52:27.377468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-13 00:52:27.377489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.377497 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.377507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-13 00:52:27.377514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.377519 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.377556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-13 00:52:27.377569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.377574 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.377580 | orchestrator | 2026-01-13 00:52:27.377585 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-13 00:52:27.377590 | orchestrator | Tuesday 13 January 2026 00:48:59 +0000 (0:00:01.195) 0:02:51.241 ******* 2026-01-13 00:52:27.377597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-13 00:52:27.377604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-13 00:52:27.377610 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.377616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-13 00:52:27.377622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-13 00:52:27.377628 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.377637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}})  2026-01-13 00:52:27.377643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-13 00:52:27.377648 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.377654 | orchestrator | 2026-01-13 00:52:27.377660 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-13 00:52:27.377666 | orchestrator | Tuesday 13 January 2026 00:49:00 +0000 (0:00:00.915) 0:02:52.157 ******* 2026-01-13 00:52:27.377672 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.377696 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.377702 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.377709 | orchestrator | 2026-01-13 00:52:27.377715 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-13 00:52:27.377721 | orchestrator | Tuesday 13 January 2026 00:49:01 +0000 (0:00:01.372) 0:02:53.529 ******* 2026-01-13 00:52:27.377728 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.377734 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.377741 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.377746 | orchestrator | 2026-01-13 00:52:27.377753 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-13 00:52:27.377760 | orchestrator | Tuesday 13 January 2026 00:49:04 +0000 (0:00:02.467) 0:02:55.997 ******* 2026-01-13 00:52:27.377772 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.377778 | orchestrator | 2026-01-13 00:52:27.377784 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-13 00:52:27.377790 | orchestrator | Tuesday 13 January 2026 00:49:05 +0000 
(0:00:01.247) 0:02:57.244 ******* 2026-01-13 00:52:27.377796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-13 00:52:27.377869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.377880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.377887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.377901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-13 00:52:27.377908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-13 00:52:27.377964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.377974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.377980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.377986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.377996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.378003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': 
{'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.378071 | orchestrator | 2026-01-13 00:52:27.378080 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-13 00:52:27.378087 | orchestrator | Tuesday 13 January 2026 00:49:09 +0000 (0:00:03.583) 0:03:00.827 ******* 2026-01-13 00:52:27.378148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-13 00:52:27.378157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.378164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.378171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.378177 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.378189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-13 00:52:27.378200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.378207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.378272 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.378282 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.378289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-13 00:52:27.378300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.378307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.378343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.378350 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.378356 | orchestrator | 2026-01-13 00:52:27.378363 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-13 00:52:27.378369 | orchestrator | Tuesday 13 January 2026 00:49:09 +0000 (0:00:00.588) 0:03:01.416 ******* 2026-01-13 00:52:27.378376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8786', 'listen_port': '8786'}})  2026-01-13 00:52:27.378384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-13 00:52:27.378390 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.378396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-13 00:52:27.378461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-13 00:52:27.378472 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.378479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-13 00:52:27.378486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-13 00:52:27.378493 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.378499 | orchestrator | 2026-01-13 00:52:27.378506 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-13 00:52:27.378513 | orchestrator | Tuesday 13 January 2026 00:49:10 +0000 (0:00:01.012) 0:03:02.428 ******* 2026-01-13 00:52:27.378520 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.378526 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.378532 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.378539 | orchestrator | 2026-01-13 
00:52:27.378563 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-13 00:52:27.378570 | orchestrator | Tuesday 13 January 2026 00:49:12 +0000 (0:00:01.257) 0:03:03.685 ******* 2026-01-13 00:52:27.378577 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.378584 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.378590 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.378596 | orchestrator | 2026-01-13 00:52:27.378602 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-13 00:52:27.378615 | orchestrator | Tuesday 13 January 2026 00:49:14 +0000 (0:00:02.105) 0:03:05.791 ******* 2026-01-13 00:52:27.378622 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.378628 | orchestrator | 2026-01-13 00:52:27.378633 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-13 00:52:27.378639 | orchestrator | Tuesday 13 January 2026 00:49:15 +0000 (0:00:01.405) 0:03:07.197 ******* 2026-01-13 00:52:27.378646 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-13 00:52:27.378652 | orchestrator | 2026-01-13 00:52:27.378659 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-13 00:52:27.378665 | orchestrator | Tuesday 13 January 2026 00:49:18 +0000 (0:00:03.093) 0:03:10.290 ******* 2026-01-13 00:52:27.378679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-13 00:52:27.378747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-01-13 00:52:27.378758 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.378787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-13 00:52:27.378805 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-13 00:52:27.378815 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.378877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-13 00:52:27.378888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-13 00:52:27.378900 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.378907 | orchestrator | 2026-01-13 00:52:27.378913 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-13 00:52:27.378919 | orchestrator | Tuesday 13 January 2026 00:49:20 +0000 (0:00:02.048) 0:03:12.338 ******* 2026-01-13 00:52:27.378931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-13 00:52:27.378937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-01-13 00:52:27.378943 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.378992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-13 00:52:27.379068 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-13 00:52:27.379079 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.379087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-13 00:52:27.379161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-13 00:52:27.379172 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.379185 | orchestrator | 2026-01-13 00:52:27.379191 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-13 00:52:27.379197 | orchestrator | Tuesday 13 January 2026 00:49:23 +0000 (0:00:02.305) 0:03:14.644 ******* 2026-01-13 00:52:27.379205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-13 00:52:27.379212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-13 00:52:27.379219 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.379230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-13 00:52:27.379237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-13 00:52:27.379243 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.379250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-13 00:52:27.379311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-13 00:52:27.379327 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.379334 | orchestrator | 2026-01-13 00:52:27.379341 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-13 00:52:27.379348 | orchestrator | Tuesday 13 January 2026 00:49:26 +0000 (0:00:02.972) 0:03:17.616 ******* 2026-01-13 00:52:27.379355 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.379361 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.379367 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.379374 | 
orchestrator | 2026-01-13 00:52:27.379381 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-13 00:52:27.379387 | orchestrator | Tuesday 13 January 2026 00:49:27 +0000 (0:00:01.883) 0:03:19.500 ******* 2026-01-13 00:52:27.379392 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.379398 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.379404 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.379410 | orchestrator | 2026-01-13 00:52:27.379417 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-13 00:52:27.379424 | orchestrator | Tuesday 13 January 2026 00:49:29 +0000 (0:00:01.412) 0:03:20.912 ******* 2026-01-13 00:52:27.379431 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.379437 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.379444 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.379450 | orchestrator | 2026-01-13 00:52:27.379455 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-13 00:52:27.379461 | orchestrator | Tuesday 13 January 2026 00:49:29 +0000 (0:00:00.338) 0:03:21.251 ******* 2026-01-13 00:52:27.379468 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.379475 | orchestrator | 2026-01-13 00:52:27.379481 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-13 00:52:27.379487 | orchestrator | Tuesday 13 January 2026 00:49:31 +0000 (0:00:01.415) 0:03:22.667 ******* 2026-01-13 00:52:27.379518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-13 00:52:27.379528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-13 00:52:27.379535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 
2026-01-13 00:52:27.379548 | orchestrator | 2026-01-13 00:52:27.379555 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-13 00:52:27.379562 | orchestrator | Tuesday 13 January 2026 00:49:32 +0000 (0:00:01.701) 0:03:24.368 ******* 2026-01-13 00:52:27.379637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-13 00:52:27.379647 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.379654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-13 00:52:27.379662 | orchestrator | skipping: 
[testbed-node-1] 2026-01-13 00:52:27.379673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-13 00:52:27.379680 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.379687 | orchestrator | 2026-01-13 00:52:27.379694 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-13 00:52:27.379701 | orchestrator | Tuesday 13 January 2026 00:49:33 +0000 (0:00:00.445) 0:03:24.814 ******* 2026-01-13 00:52:27.379710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-13 00:52:27.379718 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.379725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-13 00:52:27.379733 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.379746 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-13 00:52:27.379753 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.379759 | orchestrator | 2026-01-13 00:52:27.379764 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-13 00:52:27.379770 | orchestrator | Tuesday 13 January 2026 00:49:34 +0000 (0:00:00.844) 0:03:25.659 ******* 2026-01-13 00:52:27.379776 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.379783 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.379790 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.379797 | orchestrator | 2026-01-13 00:52:27.379804 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-13 00:52:27.379811 | orchestrator | Tuesday 13 January 2026 00:49:34 +0000 (0:00:00.447) 0:03:26.106 ******* 2026-01-13 00:52:27.379818 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.379825 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.379832 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.379839 | orchestrator | 2026-01-13 00:52:27.379846 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-13 00:52:27.379853 | orchestrator | Tuesday 13 January 2026 00:49:35 +0000 (0:00:01.252) 0:03:27.358 ******* 2026-01-13 00:52:27.379860 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.379867 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.379874 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.379881 | orchestrator | 2026-01-13 00:52:27.379888 | orchestrator | TASK [include_role : neutron] 
************************************************** 2026-01-13 00:52:27.379943 | orchestrator | Tuesday 13 January 2026 00:49:36 +0000 (0:00:00.320) 0:03:27.679 ******* 2026-01-13 00:52:27.379952 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.379960 | orchestrator | 2026-01-13 00:52:27.379967 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-13 00:52:27.379974 | orchestrator | Tuesday 13 January 2026 00:49:37 +0000 (0:00:01.431) 0:03:29.110 ******* 2026-01-13 00:52:27.379983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 00:52:27.379991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-13 00:52:27.380127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.380146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.380174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-01-13 00:52:27.380196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 00:52:27.380249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-13 00:52:27.380305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.380361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-13 00:52:27.380369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-13 00:52:27.380404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.380411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-13 00:52:27.380459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.380469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 00:52:27.380494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-13 00:52:27.380508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.380529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-13 00:52:27.380589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-13 00:52:27.380634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 00:52:27.380643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-13 00:52:27.380724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.380738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.380746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2026-01-13 00:52:27.380793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 00:52:27.380804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-13 00:52:27.380823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.380834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-13 00:52:27.380909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-13 00:52:27.380918 | orchestrator | 2026-01-13 00:52:27.380925 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-13 00:52:27.380932 | orchestrator | Tuesday 13 January 2026 00:49:41 +0000 (0:00:04.155) 0:03:33.265 ******* 2026-01-13 00:52:27.380939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 00:52:27.380966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.380985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-13 00:52:27.381061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2026-01-13 00:52:27.381073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 00:52:27.381097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.381106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.381120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 00:52:27.381225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-13 00:52:27.381252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-13 00:52:27.381301 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.381328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.381339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 00:52:27.381367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.381426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-13 00:52:27.381464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 00:52:27.381483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-13 00:52:27.381491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381503 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.381552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-13 00:52:27.381578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-13 00:52:27.381586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.381593 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.381665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': 
True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-13 00:52:27.381676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.381684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-13 
00:52:27.381691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381703 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.381740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 00:52:27.381748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-13 00:52:27.381763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-13 00:52:27.381769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.381776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-13 00:52:27.381805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-13 00:52:27.381813 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.381820 | orchestrator | 2026-01-13 00:52:27.381827 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-13 00:52:27.381835 | orchestrator | 
Tuesday 13 January 2026 00:49:43 +0000 (0:00:01.410) 0:03:34.676 ******* 2026-01-13 00:52:27.381843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-13 00:52:27.381852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-13 00:52:27.381859 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.381866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-13 00:52:27.381874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-13 00:52:27.381881 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.381888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-13 00:52:27.381895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-13 00:52:27.381902 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.381909 | orchestrator | 2026-01-13 00:52:27.381915 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-13 00:52:27.381922 | orchestrator | Tuesday 13 January 2026 00:49:45 +0000 (0:00:02.092) 0:03:36.769 ******* 2026-01-13 
00:52:27.381928 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.381935 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.381942 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.381959 | orchestrator | 2026-01-13 00:52:27.381971 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-13 00:52:27.381978 | orchestrator | Tuesday 13 January 2026 00:49:46 +0000 (0:00:01.139) 0:03:37.908 ******* 2026-01-13 00:52:27.381986 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.381994 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.382001 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.382059 | orchestrator | 2026-01-13 00:52:27.382069 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-13 00:52:27.382076 | orchestrator | Tuesday 13 January 2026 00:49:48 +0000 (0:00:02.021) 0:03:39.930 ******* 2026-01-13 00:52:27.382084 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.382097 | orchestrator | 2026-01-13 00:52:27.382104 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-13 00:52:27.382111 | orchestrator | Tuesday 13 January 2026 00:49:49 +0000 (0:00:01.162) 0:03:41.092 ******* 2026-01-13 00:52:27.382118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-13 00:52:27.382156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-13 00:52:27.382165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-13 00:52:27.382172 | orchestrator | 2026-01-13 00:52:27.382180 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-13 00:52:27.382187 | orchestrator | Tuesday 13 January 2026 00:49:52 +0000 (0:00:03.483) 0:03:44.576 ******* 2026-01-13 00:52:27.382199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-13 00:52:27.382215 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.382222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-13 00:52:27.382230 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.382254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-13 00:52:27.382263 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.382270 | orchestrator | 2026-01-13 00:52:27.382278 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-13 00:52:27.382287 | orchestrator | Tuesday 13 January 2026 00:49:53 +0000 (0:00:00.498) 0:03:45.075 ******* 2026-01-13 00:52:27.382296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382318 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.382328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382346 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.382356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382382 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.382389 | orchestrator | 2026-01-13 00:52:27.382395 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-13 00:52:27.382401 | orchestrator | Tuesday 13 January 2026 00:49:54 +0000 (0:00:00.729) 0:03:45.805 ******* 2026-01-13 00:52:27.382408 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.382420 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.382429 | 
orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.382438 | orchestrator | 2026-01-13 00:52:27.382446 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-13 00:52:27.382455 | orchestrator | Tuesday 13 January 2026 00:49:55 +0000 (0:00:01.186) 0:03:46.991 ******* 2026-01-13 00:52:27.382463 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.382472 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.382480 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.382488 | orchestrator | 2026-01-13 00:52:27.382498 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-13 00:52:27.382506 | orchestrator | Tuesday 13 January 2026 00:49:57 +0000 (0:00:02.087) 0:03:49.078 ******* 2026-01-13 00:52:27.382514 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.382522 | orchestrator | 2026-01-13 00:52:27.382530 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-13 00:52:27.382539 | orchestrator | Tuesday 13 January 2026 00:49:59 +0000 (0:00:01.512) 0:03:50.590 ******* 2026-01-13 00:52:27.382550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 00:52:27.382584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.382593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.382610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 00:52:27.382618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.382626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.382652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 00:52:27.382660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.382674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.382681 | orchestrator | 2026-01-13 00:52:27.382687 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-13 00:52:27.382693 | orchestrator | Tuesday 13 January 2026 00:50:03 +0000 (0:00:04.378) 0:03:54.969 ******* 2026-01-13 00:52:27.382706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-13 00:52:27.382713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.382739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.382748 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.382755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-13 00:52:27.382771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.382778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.382785 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.382792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-13 00:52:27.382818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.382830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.382837 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.382844 | orchestrator | 2026-01-13 00:52:27.382851 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-13 00:52:27.382858 | orchestrator | Tuesday 13 January 2026 00:50:04 +0000 (0:00:01.396) 0:03:56.366 ******* 2026-01-13 00:52:27.382865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-13 
00:52:27.382891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382897 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.382904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382931 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.382938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382951 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-13 00:52:27.382969 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.382976 | orchestrator | 2026-01-13 00:52:27.383005 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-13 00:52:27.383065 | orchestrator | Tuesday 13 January 2026 00:50:05 +0000 (0:00:00.917) 0:03:57.283 ******* 2026-01-13 00:52:27.383073 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.383079 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.383086 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.383092 | orchestrator | 2026-01-13 00:52:27.383099 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-13 00:52:27.383105 | orchestrator | Tuesday 13 January 2026 00:50:07 +0000 (0:00:01.535) 0:03:58.819 ******* 2026-01-13 00:52:27.383112 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.383119 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.383125 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.383132 | orchestrator | 2026-01-13 00:52:27.383139 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-13 00:52:27.383146 | orchestrator | Tuesday 13 January 2026 00:50:09 +0000 (0:00:02.129) 0:04:00.949 ******* 2026-01-13 00:52:27.383152 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.383159 | orchestrator | 2026-01-13 00:52:27.383166 | orchestrator | TASK 
[nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-13 00:52:27.383173 | orchestrator | Tuesday 13 January 2026 00:50:11 +0000 (0:00:01.708) 0:04:02.657 ******* 2026-01-13 00:52:27.383180 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-13 00:52:27.383187 | orchestrator | 2026-01-13 00:52:27.383193 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-13 00:52:27.383200 | orchestrator | Tuesday 13 January 2026 00:50:11 +0000 (0:00:00.882) 0:04:03.539 ******* 2026-01-13 00:52:27.383207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-13 00:52:27.383220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-13 00:52:27.383228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-13 00:52:27.383235 | orchestrator |
2026-01-13 00:52:27.383242 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-01-13 00:52:27.383250 | orchestrator | Tuesday 13 January 2026 00:50:16 +0000 (0:00:04.297) 0:04:07.837 *******
2026-01-13 00:52:27.383257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-13 00:52:27.383269 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.383277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-13 00:52:27.383284 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.383319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-13 00:52:27.383327 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.383334 | orchestrator |
2026-01-13 00:52:27.383341 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-01-13 00:52:27.383347 | orchestrator | Tuesday 13 January 2026 00:50:17 +0000 (0:00:01.392) 0:04:09.230 *******
2026-01-13 00:52:27.383354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-13 00:52:27.383362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-13 00:52:27.383370 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.383377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-13 00:52:27.383384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-13 00:52:27.383389 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.383396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-13 00:52:27.383405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-13 00:52:27.383411 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.383418 | orchestrator |
2026-01-13 00:52:27.383425 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-13 00:52:27.383431 | orchestrator | Tuesday 13 January 2026 00:50:19 +0000 (0:00:01.504) 0:04:10.734 *******
2026-01-13 00:52:27.383438 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.383456 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.383466 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.383472 | orchestrator |
2026-01-13 00:52:27.383479 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-13 00:52:27.383485 | orchestrator | Tuesday 13 January 2026 00:50:21 +0000 (0:00:02.320) 0:04:13.054 *******
2026-01-13 00:52:27.383492 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.383499 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.383505 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.383513 | orchestrator |
2026-01-13 00:52:27.383519 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-01-13 00:52:27.383525 | orchestrator | Tuesday 13 January 2026 00:50:24 +0000 (0:00:02.972) 0:04:16.027 *******
2026-01-13 00:52:27.383534 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-01-13 00:52:27.383541 | orchestrator |
2026-01-13 00:52:27.383548 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-01-13 00:52:27.383555 | orchestrator | Tuesday 13 January 2026 00:50:25 +0000 (0:00:01.430) 0:04:17.458 *******
2026-01-13 00:52:27.383563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-13 00:52:27.383571 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.383602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-13 00:52:27.383610 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.383618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-13 00:52:27.383625 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.383632 | orchestrator |
2026-01-13 00:52:27.383639 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-01-13 00:52:27.383646 | orchestrator | Tuesday 13 January 2026 00:50:27 +0000 (0:00:01.251) 0:04:18.709 *******
2026-01-13 00:52:27.383652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-13 00:52:27.383657 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.383672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-13 00:52:27.383714 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.383721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-13 00:52:27.383728 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.383735 | orchestrator |
2026-01-13 00:52:27.383742 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-01-13 00:52:27.383748 | orchestrator | Tuesday 13 January 2026 00:50:28 +0000 (0:00:01.358) 0:04:20.068 *******
2026-01-13 00:52:27.383755 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.383762 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.383769 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.383775 | orchestrator |
2026-01-13 00:52:27.383782 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-13 00:52:27.383789 | orchestrator | Tuesday 13 January 2026 00:50:30 +0000 (0:00:01.922) 0:04:21.990 *******
2026-01-13 00:52:27.383795 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.383802 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.383808 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.383814 | orchestrator |
2026-01-13 00:52:27.383821 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-13 00:52:27.383827 | orchestrator | Tuesday 13 January 2026 00:50:32 +0000 (0:00:02.541) 0:04:24.531 *******
2026-01-13 00:52:27.383834 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.383841 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.383847 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.383854 | orchestrator |
2026-01-13 00:52:27.383861 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-01-13 00:52:27.383868 | orchestrator | Tuesday 13 January 2026 00:50:36 +0000 (0:00:03.284) 0:04:27.816 *******
2026-01-13 00:52:27.383875 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-01-13 00:52:27.383882 | orchestrator |
2026-01-13 00:52:27.383889 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-01-13 00:52:27.383896 | orchestrator | Tuesday 13 January 2026 00:50:37 +0000 (0:00:00.926) 0:04:28.743 *******
2026-01-13 00:52:27.383929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-13 00:52:27.383937 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.383944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-13 00:52:27.383956 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.383964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-13 00:52:27.383971 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.383978 | orchestrator |
2026-01-13 00:52:27.383985 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-01-13 00:52:27.383993 | orchestrator | Tuesday 13 January 2026 00:50:38 +0000 (0:00:01.304) 0:04:30.047 *******
2026-01-13 00:52:27.384004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-13 00:52:27.384032 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.384040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-13 00:52:27.384047 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.384054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-13 00:52:27.384061 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.384068 | orchestrator |
2026-01-13 00:52:27.384075 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-01-13 00:52:27.384082 | orchestrator | Tuesday 13 January 2026 00:50:39 +0000 (0:00:01.591) 0:04:31.387 *******
2026-01-13 00:52:27.384088 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.384095 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.384102 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.384108 | orchestrator |
2026-01-13 00:52:27.384115 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-13 00:52:27.384122 | orchestrator | Tuesday 13 January 2026 00:50:41 +0000 (0:00:01.591) 0:04:32.979 *******
2026-01-13 00:52:27.384143 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.384174 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.384182 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.384194 | orchestrator |
2026-01-13 00:52:27.384201 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-13 00:52:27.384208 | orchestrator | Tuesday 13 January 2026 00:50:44 +0000 (0:00:02.648) 0:04:35.627 *******
2026-01-13 00:52:27.384215 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.384222 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.384228 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.384235 | orchestrator |
2026-01-13 00:52:27.384241 | orchestrator | TASK [include_role : octavia] **************************************************
2026-01-13 00:52:27.384248 | orchestrator | Tuesday 13 January 2026 00:50:47 +0000 (0:00:03.544) 0:04:39.172 *******
2026-01-13 00:52:27.384254 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:52:27.384261 | orchestrator |
2026-01-13 00:52:27.384268 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-01-13 00:52:27.384275 | orchestrator | Tuesday 13 January 2026 00:50:49 +0000 (0:00:01.611) 0:04:40.783 *******
2026-01-13 00:52:27.384283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.384291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-13 00:52:27.384299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-13 00:52:27.384398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.384459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-13 00:52:27.384467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.384474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-13 00:52:27.384481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-13 00:52:27.384512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-13 00:52:27.384520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.384527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.384559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-13 00:52:27.384567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-13 00:52:27.384574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-13 00:52:27.384584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.384591 | orchestrator |
2026-01-13 00:52:27.384598 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-01-13 00:52:27.384605 | orchestrator | Tuesday 13 January 2026 00:50:52 +0000 (0:00:03.531) 0:04:44.315 *******
2026-01-13 00:52:27.384613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.384625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-13 00:52:27.384650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-13 00:52:27.384657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-13 00:52:27.384663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.384669 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.384679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.384686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-13 00:52:27.384697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-13 00:52:27.384723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-13 00:52:27.384730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-13 00:52:27.384735 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.384741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.384751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-13 00:52:27.384757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-13 00:52:27.384768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-13 00:52:27.384791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-13 00:52:27.384798 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.384803 | orchestrator | 2026-01-13 00:52:27.384808 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-13 00:52:27.384814 | orchestrator | Tuesday 13 January 2026 00:50:53 +0000 (0:00:00.728) 0:04:45.043 ******* 2026-01-13 00:52:27.384820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-13 00:52:27.384825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-13 00:52:27.384831 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.384837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-13 00:52:27.384843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-13 00:52:27.384848 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.384854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-13 00:52:27.384859 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-13 00:52:27.384865 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.384871 | orchestrator | 2026-01-13 00:52:27.384877 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-13 00:52:27.384882 | orchestrator | Tuesday 13 January 2026 00:50:55 +0000 (0:00:01.584) 0:04:46.627 ******* 2026-01-13 00:52:27.384888 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.384894 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.384899 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.384905 | orchestrator | 2026-01-13 00:52:27.384915 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-13 00:52:27.384921 | orchestrator | Tuesday 13 January 2026 00:50:56 +0000 (0:00:01.333) 0:04:47.960 ******* 2026-01-13 00:52:27.384926 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:52:27.384937 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:52:27.384943 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:52:27.384948 | orchestrator | 2026-01-13 00:52:27.384954 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-13 00:52:27.384960 | orchestrator | Tuesday 13 January 2026 00:50:58 +0000 (0:00:02.055) 0:04:50.016 ******* 2026-01-13 00:52:27.384965 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.384971 | orchestrator | 2026-01-13 00:52:27.384977 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-13 00:52:27.384983 | orchestrator | Tuesday 13 January 2026 00:50:59 +0000 (0:00:01.348) 0:04:51.365 ******* 2026-01-13 00:52:27.384991 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:52:27.385049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:52:27.385066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:52:27.385079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:52:27.385093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:52:27.385121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:52:27.385129 | orchestrator | 2026-01-13 00:52:27.385135 | orchestrator | TASK [haproxy-config : 
Add configuration for opensearch when using single external frontend] *** 2026-01-13 00:52:27.385142 | orchestrator | Tuesday 13 January 2026 00:51:05 +0000 (0:00:06.000) 0:04:57.366 ******* 2026-01-13 00:52:27.385149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-13 00:52:27.385164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-13 00:52:27.385176 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.385183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-13 00:52:27.385190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-13 00:52:27.385216 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.385223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-13 00:52:27.385230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-13 00:52:27.385256 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.385264 | orchestrator | 2026-01-13 00:52:27.385275 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-13 00:52:27.385282 | orchestrator | Tuesday 13 January 2026 00:51:06 +0000 (0:00:00.639) 0:04:58.005 ******* 2026-01-13 00:52:27.385287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-13 00:52:27.385295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-13 00:52:27.385301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-13 00:52:27.385308 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.385314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-13 00:52:27.385320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-13 00:52:27.385327 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-13 00:52:27.385334 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.385340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-13 00:52:27.385347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-13 00:52:27.385373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-13 00:52:27.385380 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.385387 | orchestrator | 2026-01-13 00:52:27.385393 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-13 00:52:27.385398 | orchestrator | Tuesday 13 January 2026 00:51:07 +0000 (0:00:00.942) 0:04:58.948 ******* 2026-01-13 00:52:27.385404 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.385411 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.385417 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.385424 | orchestrator | 2026-01-13 00:52:27.385430 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-13 00:52:27.385437 | orchestrator | Tuesday 13 January 2026 00:51:08 +0000 (0:00:00.838) 0:04:59.787 ******* 2026-01-13 00:52:27.385449 
| orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.385456 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.385463 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.385469 | orchestrator | 2026-01-13 00:52:27.385476 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-13 00:52:27.385483 | orchestrator | Tuesday 13 January 2026 00:51:09 +0000 (0:00:01.407) 0:05:01.194 ******* 2026-01-13 00:52:27.385489 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:52:27.385496 | orchestrator | 2026-01-13 00:52:27.385502 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-13 00:52:27.385509 | orchestrator | Tuesday 13 January 2026 00:51:10 +0000 (0:00:01.382) 0:05:02.576 ******* 2026-01-13 00:52:27.385521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-13 00:52:27.385529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 00:52:27.385537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-13 00:52:27.385580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-13 00:52:27.385593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 00:52:27.385603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 00:52:27.385611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 00:52:27.385619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 00:52:27.385681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 00:52:27.385691 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-13 00:52:27.385699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-13 00:52:27.385706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-13 00:52:27.385758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-13 00:52:27.385769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-13 00:52:27.385777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-13 00:52:27.385783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-13 00:52:27.385806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-13 00:52:27.385827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385833 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-13 00:52:27.385838 | orchestrator | 2026-01-13 00:52:27.385844 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-13 00:52:27.385849 | orchestrator | Tuesday 13 January 2026 00:51:15 +0000 (0:00:04.524) 0:05:07.101 ******* 2026-01-13 00:52:27.385863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-13 00:52:27.385869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 00:52:27.385875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 00:52:27.385902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-13 00:52:27.385910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-13 00:52:27.385923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-13 00:52:27.385941 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:52:27.385964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-13 00:52:27.385971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 00:52:27.385977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.385998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 00:52:27.386004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-13 00:52:27.386110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-13 00:52:27.386119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.386125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.386135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-13 00:52:27.386148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-13 00:52:27.386154 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:52:27.386161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 00:52:27.386167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.386172 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.386182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 00:52:27.386188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-13 00:52:27.386203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-13 00:52:27.386210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.386215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 00:52:27.386221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-13 00:52:27.386226 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:52:27.386232 | orchestrator | 2026-01-13 00:52:27.386238 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-13 00:52:27.386244 | orchestrator | Tuesday 13 January 2026 00:51:16 +0000 (0:00:01.271) 0:05:08.372 ******* 2026-01-13 00:52:27.386254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-13 00:52:27.386261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-13 00:52:27.386267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-13 00:52:27.386282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-13 00:52:27.386289 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.386295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-13 00:52:27.386301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-13 00:52:27.386311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-13 00:52:27.386317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-13 00:52:27.386323 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.386332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-13 00:52:27.386338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-13 00:52:27.386344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-13 00:52:27.386349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-13 00:52:27.386355 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.386361 | orchestrator |
2026-01-13 00:52:27.386367 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-01-13 00:52:27.386373 | orchestrator | Tuesday 13 January 2026 00:51:17 +0000 (0:00:00.971) 0:05:09.344 *******
2026-01-13 00:52:27.386378 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.386384 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.386389 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.386395 | orchestrator |
2026-01-13 00:52:27.386400 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-01-13 00:52:27.386406 | orchestrator | Tuesday 13 January 2026 00:51:18 +0000 (0:00:00.456) 0:05:09.801 *******
2026-01-13 00:52:27.386412 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.386417 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.386423 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.386428 | orchestrator |
2026-01-13 00:52:27.386433 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-01-13 00:52:27.386444 | orchestrator | Tuesday 13 January 2026 00:51:19 +0000 (0:00:01.569) 0:05:11.370 *******
2026-01-13 00:52:27.386451 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:52:27.386456 | orchestrator |
2026-01-13 00:52:27.386462 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-01-13 00:52:27.386467 | orchestrator | Tuesday 13 January 2026 00:51:21 +0000 (0:00:01.857) 0:05:13.228 *******
2026-01-13 00:52:27.386494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-13 00:52:27.386503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-13 00:52:27.386514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-13 00:52:27.386521 | orchestrator |
2026-01-13 00:52:27.386527 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-01-13 00:52:27.386533 | orchestrator | Tuesday 13 January 2026 00:51:24 +0000 (0:00:02.556) 0:05:15.785 *******
2026-01-13 00:52:27.386539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-13 00:52:27.386552 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.386561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-13 00:52:27.386567 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.386572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-13 00:52:27.386578 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.386583 | orchestrator |
2026-01-13 00:52:27.386589 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-01-13 00:52:27.386599 | orchestrator | Tuesday 13 January 2026 00:51:24 +0000 (0:00:00.397) 0:05:16.183 *******
2026-01-13 00:52:27.386606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-13 00:52:27.386612 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.386618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-13 00:52:27.386624 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.386630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-13 00:52:27.386635 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.386641 | orchestrator |
2026-01-13 00:52:27.386646 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-01-13 00:52:27.386651 | orchestrator | Tuesday 13 January 2026 00:51:25 +0000 (0:00:00.981) 0:05:17.164 *******
2026-01-13 00:52:27.386662 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.386667 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.386673 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.386679 | orchestrator |
2026-01-13 00:52:27.386684 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-01-13 00:52:27.386690 | orchestrator | Tuesday 13 January 2026 00:51:26 +0000 (0:00:00.428) 0:05:17.593 *******
2026-01-13 00:52:27.386695 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.386700 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.386705 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.386711 | orchestrator |
2026-01-13 00:52:27.386716 | orchestrator | TASK [include_role : skyline] **************************************************
2026-01-13 00:52:27.386722 | orchestrator | Tuesday 13 January 2026 00:51:27 +0000 (0:00:01.307) 0:05:18.901 *******
2026-01-13 00:52:27.386727 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:52:27.386733 | orchestrator |
2026-01-13 00:52:27.386739 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-01-13 00:52:27.386744 | orchestrator | Tuesday 13 January 2026 00:51:29 +0000 (0:00:01.742) 0:05:20.644 *******
2026-01-13 00:52:27.386757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.386764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.386775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.386787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.386796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.386802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.386808 | orchestrator |
2026-01-13 00:52:27.386814 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-01-13 00:52:27.386820 | orchestrator | Tuesday 13 January 2026 00:51:35 +0000 (0:00:06.315) 0:05:26.960 *******
2026-01-13 00:52:27.386829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.386838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.386844 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.386850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.386859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.386865 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.386871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.386880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-13 00:52:27.386890 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.386896 | orchestrator |
2026-01-13 00:52:27.386901 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-01-13 00:52:27.386907 | orchestrator | Tuesday 13 January 2026 00:51:35 +0000 (0:00:00.615) 0:05:27.575 *******
2026-01-13 00:52:27.386913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-13 00:52:27.386919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-13 00:52:27.386925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-13 00:52:27.386930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-13 00:52:27.386936 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.386944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-13 00:52:27.386950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-13 00:52:27.386956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-13 00:52:27.386961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-13 00:52:27.386967 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.386973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-13 00:52:27.386978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-13 00:52:27.386985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-13 00:52:27.386994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-13 00:52:27.387000 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.387005 | orchestrator |
2026-01-13 00:52:27.387038 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-01-13 00:52:27.387045 | orchestrator | Tuesday 13 January 2026 00:51:37 +0000 (0:00:01.639) 0:05:29.215 *******
2026-01-13 00:52:27.387051 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.387058 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.387064 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.387070 | orchestrator |
2026-01-13 00:52:27.387076 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-01-13 00:52:27.387086 | orchestrator | Tuesday 13 January 2026 00:51:38 +0000 (0:00:01.351) 0:05:30.566 *******
2026-01-13 00:52:27.387092 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.387099 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.387105 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.387111 | orchestrator |
2026-01-13 00:52:27.387117 | orchestrator | TASK [include_role : swift] ****************************************************
2026-01-13 00:52:27.387123 | orchestrator | Tuesday 13 January 2026 00:51:41 +0000 (0:00:02.165) 0:05:32.732 *******
2026-01-13 00:52:27.387129 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.387135 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.387141 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.387147 | orchestrator |
2026-01-13 00:52:27.387153 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-01-13 00:52:27.387159 | orchestrator | Tuesday 13 January 2026 00:51:41 +0000 (0:00:00.323) 0:05:33.056 *******
2026-01-13 00:52:27.387165 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.387171 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.387177 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.387183 | orchestrator |
2026-01-13 00:52:27.387189 | orchestrator | TASK [include_role : trove] ****************************************************
2026-01-13 00:52:27.387195 | orchestrator | Tuesday 13 January 2026 00:51:41 +0000 (0:00:00.311) 0:05:33.367 *******
2026-01-13 00:52:27.387201 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.387207 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.387214 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.387220 | orchestrator |
2026-01-13 00:52:27.387226 | orchestrator | TASK [include_role : venus] ****************************************************
2026-01-13 00:52:27.387232 | orchestrator | Tuesday 13 January 2026 00:51:42 +0000 (0:00:00.637) 0:05:34.005 *******
2026-01-13 00:52:27.387238 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.387244 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.387251 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.387257 | orchestrator |
2026-01-13 00:52:27.387263 | orchestrator | TASK [include_role : watcher] **************************************************
2026-01-13 00:52:27.387269 | orchestrator | Tuesday 13 January 2026 00:51:42 +0000 (0:00:00.328) 0:05:34.333 *******
2026-01-13 00:52:27.387275 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.387281 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.387287 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.387294 | orchestrator |
2026-01-13 00:52:27.387300 | orchestrator | TASK [include_role : zun] ******************************************************
2026-01-13 00:52:27.387306 | orchestrator | Tuesday 13 January 2026 00:51:43 +0000 (0:00:00.312) 0:05:34.646 *******
2026-01-13 00:52:27.387313 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.387319 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.387325 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.387332 | orchestrator |
2026-01-13 00:52:27.387338 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-01-13 00:52:27.387354 | orchestrator | Tuesday 13 January 2026 00:51:43 +0000 (0:00:00.844) 0:05:35.490 *******
2026-01-13 00:52:27.387360 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.387368 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.387375 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.387382 | orchestrator |
2026-01-13 00:52:27.387388 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-01-13 00:52:27.387394 | orchestrator | Tuesday 13 January 2026 00:51:44 +0000 (0:00:00.699) 0:05:36.190 *******
2026-01-13 00:52:27.387400 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.387407 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.387413 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.387419 | orchestrator |
2026-01-13 00:52:27.387426 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-01-13 00:52:27.387433 | orchestrator | Tuesday 13 January 2026 00:51:44 +0000 (0:00:00.359) 0:05:36.550 *******
2026-01-13 00:52:27.387440 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.387446 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.387454 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.387460 | orchestrator |
2026-01-13 00:52:27.387467 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-01-13 00:52:27.387474 | orchestrator | Tuesday 13 January 2026 00:51:45 +0000 (0:00:00.895) 0:05:37.446 *******
2026-01-13 00:52:27.387481 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.387488 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.387495 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.387502 | orchestrator |
2026-01-13 00:52:27.387509 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-01-13 00:52:27.387517 | orchestrator | Tuesday 13 January 2026 00:51:47 +0000 (0:00:01.336) 0:05:38.782 *******
2026-01-13 00:52:27.387523 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.387530 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.387537 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.387544 | orchestrator |
2026-01-13 00:52:27.387551 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-01-13 00:52:27.387559 | orchestrator | Tuesday 13 January 2026 00:51:48 +0000 (0:00:00.960) 0:05:39.743 *******
2026-01-13 00:52:27.387566 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.387574 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.387581 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.387588 | orchestrator |
2026-01-13 00:52:27.387594 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-01-13 00:52:27.387601 | orchestrator | Tuesday 13 January 2026 00:51:57 +0000 (0:00:09.590) 0:05:49.334 *******
2026-01-13 00:52:27.387607 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.387613 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.387620 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.387626 | orchestrator |
2026-01-13 00:52:27.387632 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-01-13 00:52:27.387638 | orchestrator | Tuesday 13 January 2026 00:51:58 +0000 (0:00:00.816) 0:05:50.151 *******
2026-01-13 00:52:27.387645 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.387651 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.387657 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.387663 | orchestrator |
2026-01-13 00:52:27.387669 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-01-13 00:52:27.387676 | orchestrator | Tuesday 13 January 2026 00:52:07 +0000 (0:00:08.751) 0:05:58.902 *******
2026-01-13 00:52:27.387682 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.387694 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.387701 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.387708 | orchestrator |
2026-01-13 00:52:27.387714 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-01-13 00:52:27.387720 | orchestrator | Tuesday 13 January 2026 00:52:11 +0000 (0:00:04.183) 0:06:03.085 *******
2026-01-13 00:52:27.387726 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:52:27.387738 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:52:27.387745 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:52:27.387752 | orchestrator |
2026-01-13 00:52:27.387759 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-01-13 00:52:27.387766 | orchestrator | Tuesday 13 January 2026 00:52:15 +0000 (0:00:04.425) 0:06:07.511 *******
2026-01-13 00:52:27.387773 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.387779 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.387786 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.387793 | orchestrator |
2026-01-13 00:52:27.387800 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-01-13 00:52:27.387806 | orchestrator | Tuesday 13 January 2026 00:52:16 +0000 (0:00:00.367) 0:06:07.878 *******
2026-01-13 00:52:27.387813 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.387820 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.387826 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.387834 | orchestrator |
2026-01-13 00:52:27.387840 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-01-13 00:52:27.387848 | orchestrator | Tuesday 13 January 2026 00:52:16 +0000 (0:00:00.356) 0:06:08.235 *******
2026-01-13 00:52:27.387854 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.387860 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.387868 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.387874 | orchestrator |
2026-01-13 00:52:27.387881 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-01-13 00:52:27.387887 | orchestrator | Tuesday 13 January 2026 00:52:17 +0000 (0:00:00.736) 0:06:08.972 *******
2026-01-13 00:52:27.387894 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.387901 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.387909 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.387915 | orchestrator |
2026-01-13 00:52:27.387922 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-01-13 00:52:27.387928 | orchestrator | Tuesday 13 January 2026 00:52:17 +0000 (0:00:00.334) 0:06:09.306 *******
2026-01-13 00:52:27.387935 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.387942 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.387949 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.387956 | orchestrator |
2026-01-13 00:52:27.387963 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-01-13 00:52:27.387970 | orchestrator | Tuesday 13 January 2026 00:52:18 +0000 (0:00:00.379) 0:06:09.686 *******
2026-01-13 00:52:27.387977 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:52:27.388033 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:52:27.388042 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:52:27.388048 | orchestrator |
2026-01-13 00:52:27.388055 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-01-13 00:52:27.388061 | orchestrator | Tuesday 13 January 2026 00:52:18 +0000 (0:00:00.339) 0:06:10.025 *******
2026-01-13 00:52:27.388067 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:52:27.388073 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:52:27.388080 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:52:27.388086 | orchestrator |
2026-01-13 00:52:27.388093 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-01-13
00:52:27.388099 | orchestrator | Tuesday 13 January 2026 00:52:23 +0000 (0:00:05.138) 0:06:15.163 ******* 2026-01-13 00:52:27.388106 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:52:27.388113 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:52:27.388119 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:52:27.388126 | orchestrator | 2026-01-13 00:52:27.388133 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:52:27.388140 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-13 00:52:27.388149 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-13 00:52:27.388160 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-13 00:52:27.388166 | orchestrator | 2026-01-13 00:52:27.388172 | orchestrator | 2026-01-13 00:52:27.388179 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:52:27.388185 | orchestrator | Tuesday 13 January 2026 00:52:24 +0000 (0:00:00.907) 0:06:16.071 ******* 2026-01-13 00:52:27.388191 | orchestrator | =============================================================================== 2026-01-13 00:52:27.388197 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.59s 2026-01-13 00:52:27.388203 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.75s 2026-01-13 00:52:27.388210 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.32s 2026-01-13 00:52:27.388216 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.15s 2026-01-13 00:52:27.388223 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.00s 2026-01-13 00:52:27.388229 | orchestrator | 
loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.14s 2026-01-13 00:52:27.388235 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.62s 2026-01-13 00:52:27.388242 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.52s 2026-01-13 00:52:27.388248 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.49s 2026-01-13 00:52:27.388260 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.45s 2026-01-13 00:52:27.388266 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.43s 2026-01-13 00:52:27.388272 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.38s 2026-01-13 00:52:27.388279 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.30s 2026-01-13 00:52:27.388284 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.18s 2026-01-13 00:52:27.388290 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.16s 2026-01-13 00:52:27.388297 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.16s 2026-01-13 00:52:27.388303 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.09s 2026-01-13 00:52:27.388309 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.07s 2026-01-13 00:52:27.388316 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.84s 2026-01-13 00:52:27.388322 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.60s 2026-01-13 00:52:27.388328 | orchestrator | 2026-01-13 00:52:27 | INFO  | Task 99df277e-6bfc-4fe3-b863-227d13b150e2 is in state STARTED 2026-01-13 00:52:27.388335 | orchestrator | 
2026-01-13 00:52:27 | INFO  | Task 3a479429-e057-43b6-a348-a20efcea0e17 is in state STARTED
2026-01-13 00:52:27.388341 | orchestrator | 2026-01-13 00:52:27 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:52:27.388347 | orchestrator | 2026-01-13 00:52:27 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:55:00.028842 | orchestrator | 2026-01-13 00:55:00 | INFO  | Task
99df277e-6bfc-4fe3-b863-227d13b150e2 is in state STARTED
2026-01-13 00:55:00.030357 | orchestrator | 2026-01-13 00:55:00 | INFO  | Task 3a479429-e057-43b6-a348-a20efcea0e17 is in state STARTED
2026-01-13 00:55:00.033105 | orchestrator | 2026-01-13 00:55:00 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:55:00.033189 | orchestrator | 2026-01-13 00:55:00 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:55:03.084658 | orchestrator | 2026-01-13 00:55:03 | INFO  | Task 99df277e-6bfc-4fe3-b863-227d13b150e2 is in state STARTED
2026-01-13 00:55:03.090152 | orchestrator | 2026-01-13 00:55:03 | INFO  | Task 3a479429-e057-43b6-a348-a20efcea0e17 is in state STARTED
2026-01-13 00:55:03.096523 | orchestrator | 2026-01-13 00:55:03 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED
2026-01-13 00:55:03.096577 | orchestrator | 2026-01-13 00:55:03 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:55:06.148278 | orchestrator |
2026-01-13 00:55:06.148342 | orchestrator |
2026-01-13 00:55:06.148352 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-13 00:55:06.148373 | orchestrator |
2026-01-13 00:55:06.148379 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-13 00:55:06.148385 | orchestrator | Tuesday 13 January 2026 00:52:29 +0000 (0:00:00.287) 0:00:00.287 *******
2026-01-13 00:55:06.148391 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:06.148400 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:06.148406 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:06.148411 | orchestrator |
2026-01-13 00:55:06.148417 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-13 00:55:06.148427 | orchestrator | Tuesday 13 January 2026 00:52:30 +0000 (0:00:00.286) 0:00:00.574 *******
2026-01-13 00:55:06.148438 | orchestrator | ok:
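The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above come from a client polling remote task state until each task finishes. A minimal sketch of such a poll loop (hypothetical names, not the actual osism client; `get_state` stands in for whatever backend lookup is used):

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll get_state(task_id) until every task leaves the STARTED state.

    Returns a dict of final states; raises TimeoutError if the deadline
    passes while tasks are still pending.  Hypothetical helper mirroring
    the log output above, not the real osism implementation.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Keep polling only the tasks that are still running.
        pending = {t for t in pending if states[t] == "STARTED"}
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    return states
```

With three concurrent tasks, as in this job, each check cycle prints one line per task followed by the wait message, which matches the cadence of the log.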
[testbed-node-0] => (item=enable_opensearch_True) 2026-01-13 00:55:06.148449 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-13 00:55:06.148459 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-13 00:55:06.148469 | orchestrator | 2026-01-13 00:55:06.148479 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-13 00:55:06.148489 | orchestrator | 2026-01-13 00:55:06.148568 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-13 00:55:06.148580 | orchestrator | Tuesday 13 January 2026 00:52:30 +0000 (0:00:00.456) 0:00:01.031 ******* 2026-01-13 00:55:06.148587 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:06.148593 | orchestrator | 2026-01-13 00:55:06.148599 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-13 00:55:06.148605 | orchestrator | Tuesday 13 January 2026 00:52:31 +0000 (0:00:00.475) 0:00:01.507 ******* 2026-01-13 00:55:06.148611 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-13 00:55:06.148616 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-13 00:55:06.148622 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-13 00:55:06.148628 | orchestrator | 2026-01-13 00:55:06.148633 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-13 00:55:06.148639 | orchestrator | Tuesday 13 January 2026 00:52:31 +0000 (0:00:00.707) 0:00:02.214 ******* 2026-01-13 00:55:06.148655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:55:06.148867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:55:06.148912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:55:06.148926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:55:06.148944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:55:06.148957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:55:06.148974 | orchestrator | 2026-01-13 00:55:06.148981 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-13 00:55:06.148988 | orchestrator | Tuesday 13 January 2026 00:52:34 +0000 (0:00:02.228) 0:00:04.442 ******* 2026-01-13 00:55:06.148994 | 
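Each service definition dumped above carries a healthcheck spec (`interval`, `retries`, `start_period`, `timeout` as strings of seconds, plus a `test` command such as `healthcheck_curl http://…:9200`). A sketch of how such a spec maps to a retry loop (illustrative only; `check` stands in for the `test` command, and the real evaluation is done by the container runtime, not this code):

```python
import time


def run_healthcheck(check, spec, sleep=time.sleep):
    """Evaluate a container-style healthcheck spec against a callable.

    spec mirrors the shape logged above, with interval/retries given as
    strings of seconds.  `check` must return True (healthy) or False.
    Hypothetical helper for illustration.
    """
    retries = int(spec.get("retries", "3"))
    interval = float(spec.get("interval", "30"))
    for attempt in range(1, retries + 1):
        if check():
            return "healthy"
        if attempt < retries:
            # Wait out the configured interval before the next probe.
            sleep(interval)
    return "unhealthy"
```

Injecting `sleep` makes the loop testable without real delays; the runtime additionally grants `start_period` seconds of grace before failures count, which this sketch omits.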
orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:06.148999 | orchestrator | 2026-01-13 00:55:06.149005 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-13 00:55:06.149011 | orchestrator | Tuesday 13 January 2026 00:52:34 +0000 (0:00:00.574) 0:00:05.017 ******* 2026-01-13 00:55:06.149024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:55:06.149031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:55:06.149040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:55:06.149047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:55:06.149061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:55:06.149068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:55:06.149074 | orchestrator | 2026-01-13 00:55:06.149080 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-13 00:55:06.149086 | orchestrator | Tuesday 13 January 2026 00:52:37 +0000 (0:00:03.053) 0:00:08.071 ******* 2026-01-13 00:55:06.149095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-13 00:55:06.149101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-13 00:55:06.149111 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:06.149118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-13 00:55:06.149134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-13 00:55:06.149144 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:06.149155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-13 00:55:06.149170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-13 00:55:06.149181 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:06.149187 | orchestrator | 2026-01-13 00:55:06.149193 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-13 00:55:06.149199 | orchestrator | Tuesday 13 January 2026 00:52:38 +0000 (0:00:00.996) 0:00:09.067 ******* 2026-01-13 00:55:06.149205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-13 00:55:06.149216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-13 00:55:06.149223 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:06.149229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-13 00:55:06.149238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-13 00:55:06.149248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-13 00:55:06.149260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-13 00:55:06.149267 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:06.149272 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:06.149278 | orchestrator | 2026-01-13 00:55:06.149284 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-13 00:55:06.149290 | orchestrator | Tuesday 13 January 2026 00:52:39 +0000 (0:00:00.818) 0:00:09.886 ******* 2026-01-13 00:55:06.149296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:55:06.149309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:55:06.149319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:55:06.149330 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:55:06.149336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:55:06.149346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:55:06.149355 | orchestrator | 2026-01-13 00:55:06.149361 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-13 00:55:06.149367 | orchestrator | Tuesday 13 January 2026 00:52:41 +0000 (0:00:02.177) 0:00:12.064 ******* 2026-01-13 00:55:06.149373 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:06.149378 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:06.149384 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:06.149390 | orchestrator | 2026-01-13 00:55:06.149396 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-13 00:55:06.149402 | orchestrator | Tuesday 13 January 2026 00:52:44 +0000 
(0:00:03.351) 0:00:15.415 ******* 2026-01-13 00:55:06.149407 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:06.149413 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:06.149419 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:06.149424 | orchestrator | 2026-01-13 00:55:06.149430 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-13 00:55:06.149436 | orchestrator | Tuesday 13 January 2026 00:52:46 +0000 (0:00:01.693) 0:00:17.109 ******* 2026-01-13 00:55:06.149442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:55:06.149452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:55:06.149531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-13 00:55:06.149548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:55:06.149560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:55:06.149576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-13 00:55:06.149587 | orchestrator | 2026-01-13 00:55:06.149596 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-13 00:55:06.149605 | orchestrator | Tuesday 13 January 2026 00:52:48 +0000 (0:00:02.152) 0:00:19.262 ******* 2026-01-13 00:55:06.149615 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:06.149625 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:06.149636 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:06.149645 | orchestrator | 2026-01-13 00:55:06.149655 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-13 00:55:06.149671 | orchestrator | Tuesday 13 January 2026 00:52:49 +0000 (0:00:00.256) 0:00:19.519 ******* 2026-01-13 00:55:06.149682 | orchestrator | 2026-01-13 00:55:06.149692 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-13 00:55:06.149702 | orchestrator | Tuesday 13 January 2026 00:52:49 +0000 (0:00:00.058) 0:00:19.577 ******* 2026-01-13 00:55:06.149712 | orchestrator | 2026-01-13 00:55:06.149722 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-13 00:55:06.149731 | orchestrator | Tuesday 13 January 2026 00:52:49 +0000 (0:00:00.058) 0:00:19.636 ******* 2026-01-13 00:55:06.149741 | orchestrator | 2026-01-13 00:55:06.149753 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-13 00:55:06.149781 | orchestrator | 
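After the restart handlers bring the opensearch containers back up, the play waits for the cluster to report healthy. A minimal sketch of that decision, assuming the task polls the standard `_cluster/health` endpoint (green means fully healthy; yellow is typically accepted while replicas settle; red is not):

```python
import json

# Assumption: health is judged from the "status" field of a _cluster/health
# response body; green and yellow pass, red fails.
ACCEPTABLE_STATUSES = {"green", "yellow"}

def cluster_is_healthy(health_json):
    """Decide health from a _cluster/health response body (a JSON string)."""
    return json.loads(health_json).get("status") in ACCEPTABLE_STATUSES

# Example response shapes only; not captured from this deployment:
print(cluster_is_healthy('{"status": "green", "number_of_nodes": 3}'))
print(cluster_is_healthy('{"status": "red", "number_of_nodes": 1}'))
```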
Tuesday 13 January 2026 00:52:49 +0000 (0:00:00.060) 0:00:19.696 ******* 2026-01-13 00:55:06.149792 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:06.149802 | orchestrator | 2026-01-13 00:55:06.149811 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-13 00:55:06.149817 | orchestrator | Tuesday 13 January 2026 00:52:49 +0000 (0:00:00.191) 0:00:19.888 ******* 2026-01-13 00:55:06.149822 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:06.149828 | orchestrator | 2026-01-13 00:55:06.149834 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-13 00:55:06.149844 | orchestrator | Tuesday 13 January 2026 00:52:49 +0000 (0:00:00.457) 0:00:20.346 ******* 2026-01-13 00:55:06.149849 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:06.149855 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:06.149861 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:06.149867 | orchestrator | 2026-01-13 00:55:06.149872 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-13 00:55:06.149878 | orchestrator | Tuesday 13 January 2026 00:53:46 +0000 (0:00:56.663) 0:01:17.009 ******* 2026-01-13 00:55:06.149884 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:06.149889 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:06.149895 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:06.149900 | orchestrator | 2026-01-13 00:55:06.149906 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-13 00:55:06.149912 | orchestrator | Tuesday 13 January 2026 00:54:50 +0000 (0:01:03.539) 0:02:20.549 ******* 2026-01-13 00:55:06.149917 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:06.149923 | orchestrator | 2026-01-13 
00:55:06.149929 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-13 00:55:06.149934 | orchestrator | Tuesday 13 January 2026 00:54:50 +0000 (0:00:00.709) 0:02:21.258 ******* 2026-01-13 00:55:06.149940 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:06.149946 | orchestrator | 2026-01-13 00:55:06.149952 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-01-13 00:55:06.149960 | orchestrator | Tuesday 13 January 2026 00:54:53 +0000 (0:00:02.488) 0:02:23.746 ******* 2026-01-13 00:55:06.149970 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:06.149979 | orchestrator | 2026-01-13 00:55:06.149989 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-13 00:55:06.149998 | orchestrator | Tuesday 13 January 2026 00:54:55 +0000 (0:00:02.171) 0:02:25.918 ******* 2026-01-13 00:55:06.150007 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:06.150062 | orchestrator | 2026-01-13 00:55:06.150073 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-13 00:55:06.150083 | orchestrator | Tuesday 13 January 2026 00:54:58 +0000 (0:00:02.612) 0:02:28.531 ******* 2026-01-13 00:55:06.150092 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:06.150098 | orchestrator | 2026-01-13 00:55:06.150104 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-13 00:55:06.150109 | orchestrator | Tuesday 13 January 2026 00:55:01 +0000 (0:00:03.225) 0:02:31.756 ******* 2026-01-13 00:55:06.150115 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:06.150126 | orchestrator | 2026-01-13 00:55:06.150132 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:55:06.150138 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-01-13 00:55:06.150145 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-13 00:55:06.150158 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-13 00:55:06.150165 | orchestrator | 2026-01-13 00:55:06.150171 | orchestrator | 2026-01-13 00:55:06.150176 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:55:06.150182 | orchestrator | Tuesday 13 January 2026 00:55:03 +0000 (0:00:02.498) 0:02:34.255 ******* 2026-01-13 00:55:06.150188 | orchestrator | =============================================================================== 2026-01-13 00:55:06.150193 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 63.54s 2026-01-13 00:55:06.150199 | orchestrator | opensearch : Restart opensearch container ------------------------------ 56.66s 2026-01-13 00:55:06.150205 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.35s 2026-01-13 00:55:06.150211 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.23s 2026-01-13 00:55:06.150216 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.05s 2026-01-13 00:55:06.150226 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.61s 2026-01-13 00:55:06.150239 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.50s 2026-01-13 00:55:06.150252 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.49s 2026-01-13 00:55:06.150261 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.23s 2026-01-13 00:55:06.150271 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.18s 2026-01-13 
00:55:06.150280 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.17s 2026-01-13 00:55:06.150289 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.15s 2026-01-13 00:55:06.150297 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.69s 2026-01-13 00:55:06.150307 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.00s 2026-01-13 00:55:06.150317 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.82s 2026-01-13 00:55:06.150327 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.71s 2026-01-13 00:55:06.150337 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.71s 2026-01-13 00:55:06.150348 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-01-13 00:55:06.150357 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2026-01-13 00:55:06.150367 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.46s 2026-01-13 00:55:06.150383 | orchestrator | 2026-01-13 00:55:06 | INFO  | Task 99df277e-6bfc-4fe3-b863-227d13b150e2 is in state SUCCESS 2026-01-13 00:55:06.150393 | orchestrator | 2026-01-13 00:55:06 | INFO  | Task 3a479429-e057-43b6-a348-a20efcea0e17 is in state STARTED 2026-01-13 00:55:06.150402 | orchestrator | 2026-01-13 00:55:06 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state STARTED 2026-01-13 00:55:06.150411 | orchestrator | 2026-01-13 00:55:06 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:55:09.203742 | orchestrator | 2026-01-13 00:55:09 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state STARTED 2026-01-13 00:55:09.205798 | orchestrator | 2026-01-13 00:55:09 | INFO  | Task 
3a479429-e057-43b6-a348-a20efcea0e17 is in state STARTED 2026-01-13 00:55:09.213059 | orchestrator | 2026-01-13 00:55:09 | INFO  | Task 15d62ab2-5891-4568-a275-4a6f5a011aee is in state SUCCESS 2026-01-13 00:55:09.216034 | orchestrator | 2026-01-13 00:55:09.216126 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-13 00:55:09.216137 | orchestrator | 2.16.14 2026-01-13 00:55:09.216145 | orchestrator | 2026-01-13 00:55:09.216153 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-01-13 00:55:09.216159 | orchestrator | 2026-01-13 00:55:09.216165 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-13 00:55:09.216171 | orchestrator | Tuesday 13 January 2026 00:43:48 +0000 (0:00:00.666) 0:00:00.666 ******* 2026-01-13 00:55:09.216178 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.216186 | orchestrator | 2026-01-13 00:55:09.216192 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-13 00:55:09.216198 | orchestrator | Tuesday 13 January 2026 00:43:49 +0000 (0:00:01.021) 0:00:01.687 ******* 2026-01-13 00:55:09.216205 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.216211 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.216217 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.216223 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.216229 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.216236 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.216242 | orchestrator | 2026-01-13 00:55:09.216248 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-13 00:55:09.216255 | orchestrator | Tuesday 13 January 2026 00:43:51 +0000 (0:00:01.566) 
0:00:03.254 ******* 2026-01-13 00:55:09.216261 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.216267 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.216274 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.216280 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.216286 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.216293 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.216355 | orchestrator | 2026-01-13 00:55:09.216362 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-13 00:55:09.216369 | orchestrator | Tuesday 13 January 2026 00:43:51 +0000 (0:00:00.735) 0:00:03.990 ******* 2026-01-13 00:55:09.216376 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.216383 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.216389 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.216396 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.216402 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.216409 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.216415 | orchestrator | 2026-01-13 00:55:09.216422 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-13 00:55:09.216429 | orchestrator | Tuesday 13 January 2026 00:43:52 +0000 (0:00:00.897) 0:00:04.887 ******* 2026-01-13 00:55:09.216436 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.216443 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.216450 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.216457 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.216847 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.216868 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.216875 | orchestrator | 2026-01-13 00:55:09.216882 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-13 00:55:09.216889 | orchestrator | Tuesday 13 January 2026 
00:43:53 +0000 (0:00:00.660) 0:00:05.548 ******* 2026-01-13 00:55:09.216896 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.216903 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.216909 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.216915 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.216922 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.216928 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.216947 | orchestrator | 2026-01-13 00:55:09.216954 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-13 00:55:09.216960 | orchestrator | Tuesday 13 January 2026 00:43:53 +0000 (0:00:00.516) 0:00:06.065 ******* 2026-01-13 00:55:09.216967 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.216973 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.216979 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.216986 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.216992 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.216999 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.217005 | orchestrator | 2026-01-13 00:55:09.217012 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-13 00:55:09.217018 | orchestrator | Tuesday 13 January 2026 00:43:54 +0000 (0:00:00.809) 0:00:06.874 ******* 2026-01-13 00:55:09.217024 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.217032 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.217038 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.217044 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.217050 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.217119 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.217126 | orchestrator | 2026-01-13 00:55:09.217133 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 
2026-01-13 00:55:09.217139 | orchestrator | Tuesday 13 January 2026 00:43:55 +0000 (0:00:00.680) 0:00:07.555 ******* 2026-01-13 00:55:09.217146 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.217152 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.217158 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.217220 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.217229 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.217236 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.217454 | orchestrator | 2026-01-13 00:55:09.217466 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-13 00:55:09.217472 | orchestrator | Tuesday 13 January 2026 00:43:56 +0000 (0:00:00.641) 0:00:08.197 ******* 2026-01-13 00:55:09.217479 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-13 00:55:09.217485 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-13 00:55:09.217492 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-13 00:55:09.217498 | orchestrator | 2026-01-13 00:55:09.217504 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-13 00:55:09.217510 | orchestrator | Tuesday 13 January 2026 00:43:56 +0000 (0:00:00.642) 0:00:08.839 ******* 2026-01-13 00:55:09.217517 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.217523 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.217529 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.217555 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.217562 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.217568 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.217575 | orchestrator | 2026-01-13 00:55:09.217581 | orchestrator | TASK [ceph-facts : Find a running mon container] 
******************************* 2026-01-13 00:55:09.217588 | orchestrator | Tuesday 13 January 2026 00:43:57 +0000 (0:00:01.208) 0:00:10.048 ******* 2026-01-13 00:55:09.217594 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-13 00:55:09.217600 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-13 00:55:09.217607 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-13 00:55:09.217613 | orchestrator | 2026-01-13 00:55:09.217619 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-13 00:55:09.217625 | orchestrator | Tuesday 13 January 2026 00:44:01 +0000 (0:00:03.342) 0:00:13.391 ******* 2026-01-13 00:55:09.217632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-13 00:55:09.217638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-13 00:55:09.217654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-13 00:55:09.217662 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.217668 | orchestrator | 2026-01-13 00:55:09.217674 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-13 00:55:09.217680 | orchestrator | Tuesday 13 January 2026 00:44:02 +0000 (0:00:00.772) 0:00:14.163 ******* 2026-01-13 00:55:09.217687 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.217696 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 
'item'})  2026-01-13 00:55:09.217702 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.217709 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.217715 | orchestrator | 2026-01-13 00:55:09.217721 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-13 00:55:09.217728 | orchestrator | Tuesday 13 January 2026 00:44:03 +0000 (0:00:01.044) 0:00:15.208 ******* 2026-01-13 00:55:09.217735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.217743 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.217792 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 
'item'})  2026-01-13 00:55:09.217805 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.217811 | orchestrator | 2026-01-13 00:55:09.217817 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-13 00:55:09.217824 | orchestrator | Tuesday 13 January 2026 00:44:03 +0000 (0:00:00.405) 0:00:15.614 ******* 2026-01-13 00:55:09.217849 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-13 00:43:58.698129', 'end': '2026-01-13 00:43:59.016841', 'delta': '0:00:00.318712', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.217859 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-13 00:43:59.859242', 'end': '2026-01-13 00:44:00.158616', 'delta': '0:00:00.299374', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.217910 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-13 00:44:00.795375', 'end': '2026-01-13 00:44:01.103968', 'delta': '0:00:00.308593', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.217917 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.217924 | orchestrator | 2026-01-13 00:55:09.217930 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-13 00:55:09.218216 | orchestrator | Tuesday 13 January 2026 00:44:03 +0000 (0:00:00.285) 0:00:15.899 ******* 2026-01-13 00:55:09.218224 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.218230 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.218237 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.218270 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.218277 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.218283 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.218289 | orchestrator | 2026-01-13 00:55:09.218295 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-13 00:55:09.218302 | orchestrator | Tuesday 13 January 2026 00:44:05 +0000 (0:00:01.664) 0:00:17.564 ******* 2026-01-13 00:55:09.218308 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-13 00:55:09.218315 | orchestrator | 2026-01-13 00:55:09.218321 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 
2026-01-13 00:55:09.218327 | orchestrator | Tuesday 13 January 2026 00:44:06 +0000 (0:00:00.757) 0:00:18.322 ******* 2026-01-13 00:55:09.218333 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.218340 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.218346 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.218352 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.218358 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.218364 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.218370 | orchestrator | 2026-01-13 00:55:09.218376 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-13 00:55:09.218382 | orchestrator | Tuesday 13 January 2026 00:44:08 +0000 (0:00:02.089) 0:00:20.412 ******* 2026-01-13 00:55:09.218387 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.218393 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.218399 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.218404 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.218410 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.218416 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.218422 | orchestrator | 2026-01-13 00:55:09.218428 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-13 00:55:09.218433 | orchestrator | Tuesday 13 January 2026 00:44:10 +0000 (0:00:01.944) 0:00:22.356 ******* 2026-01-13 00:55:09.218439 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.218450 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.218456 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.218465 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.218471 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.218476 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.218482 | 
orchestrator | 2026-01-13 00:55:09.218488 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-13 00:55:09.218494 | orchestrator | Tuesday 13 January 2026 00:44:11 +0000 (0:00:01.503) 0:00:23.859 ******* 2026-01-13 00:55:09.218500 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.218505 | orchestrator | 2026-01-13 00:55:09.218511 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-13 00:55:09.218517 | orchestrator | Tuesday 13 January 2026 00:44:11 +0000 (0:00:00.126) 0:00:23.986 ******* 2026-01-13 00:55:09.218523 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.218529 | orchestrator | 2026-01-13 00:55:09.218534 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-13 00:55:09.218540 | orchestrator | Tuesday 13 January 2026 00:44:12 +0000 (0:00:00.377) 0:00:24.364 ******* 2026-01-13 00:55:09.218546 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.218552 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.218557 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.218581 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.218589 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.218595 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.218602 | orchestrator | 2026-01-13 00:55:09.218609 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-13 00:55:09.218724 | orchestrator | Tuesday 13 January 2026 00:44:13 +0000 (0:00:01.133) 0:00:25.497 ******* 2026-01-13 00:55:09.218872 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.218976 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.218982 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.218987 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.218993 | 
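The fsid tasks above first try to read an existing cluster fsid; when none is found, the `Generate cluster fsid` task creates a fresh one. A Ceph fsid is an ordinary UUID, so the generation step can be sketched as follows (the exact command used inside the role may differ):

```python
import re
import uuid

# Generating a cluster fsid is equivalent to producing a random UUID.
fsid = str(uuid.uuid4())

# Sanity-check the shape before it would be written into ceph.conf.
FSID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"
)
assert FSID_RE.match(fsid)
print(fsid)
```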
orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.218999 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.219005 | orchestrator | 2026-01-13 00:55:09.219010 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-13 00:55:09.219016 | orchestrator | Tuesday 13 January 2026 00:44:14 +0000 (0:00:01.175) 0:00:26.672 ******* 2026-01-13 00:55:09.219022 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.219028 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.219034 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.219039 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.219045 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.219051 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.219057 | orchestrator | 2026-01-13 00:55:09.219063 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-13 00:55:09.219068 | orchestrator | Tuesday 13 January 2026 00:44:15 +0000 (0:00:00.680) 0:00:27.352 ******* 2026-01-13 00:55:09.219074 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.219080 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.219085 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.219091 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.219097 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.219103 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.219108 | orchestrator | 2026-01-13 00:55:09.219114 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-13 00:55:09.219120 | orchestrator | Tuesday 13 January 2026 00:44:16 +0000 (0:00:01.590) 0:00:28.943 ******* 2026-01-13 00:55:09.219126 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.219132 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.219138 | 
orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.219144 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.219156 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.219162 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.219168 | orchestrator | 2026-01-13 00:55:09.219174 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-13 00:55:09.219181 | orchestrator | Tuesday 13 January 2026 00:44:17 +0000 (0:00:00.743) 0:00:29.686 ******* 2026-01-13 00:55:09.219186 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.219193 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.219199 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.219205 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.219212 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.219218 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.219224 | orchestrator | 2026-01-13 00:55:09.219231 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-13 00:55:09.219238 | orchestrator | Tuesday 13 January 2026 00:44:18 +0000 (0:00:00.825) 0:00:30.512 ******* 2026-01-13 00:55:09.219393 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.219404 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.219410 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.219417 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.219459 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.219545 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.219551 | orchestrator | 2026-01-13 00:55:09.219555 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-13 00:55:09.219559 | orchestrator | Tuesday 13 January 2026 00:44:19 +0000 (0:00:00.678) 0:00:31.190 ******* 2026-01-13 00:55:09.219564 | 
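The `Resolve device link(s)` family of tasks above normalises device paths given as `/dev/disk/by-id/...` style symlinks to their canonical block devices, so the same disk is never configured twice under two names. A self-contained sketch of that resolution using `os.path.realpath` (the paths here are temporary stand-ins, not real devices):

```python
import os
import tempfile

# Create a stand-in "device" and a by-id style symlink pointing at it.
tmp = tempfile.mkdtemp()
real_dev = os.path.join(tmp, "sdb")
open(real_dev, "w").close()
link = os.path.join(tmp, "scsi-0QEMU_QEMU_HARDDISK_example")
os.symlink(real_dev, link)

# Resolving the symlink yields the canonical device path,
# which is what the role stores back into its device lists.
resolved = os.path.realpath(link)
print(resolved)
```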
orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b9be54a9--cd9c--568c--9220--61b18da052d9-osd--block--b9be54a9--cd9c--568c--9220--61b18da052d9', 'dm-uuid-LVM-tI9LueIqoznnHWvc67dyxcKb2DRlZadxhD8MTBDVbVSuVr75iGA0ykjhJTLhvbvd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.219573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--03961d85--1922--5669--8251--0ccc6cca9fac-osd--block--03961d85--1922--5669--8251--0ccc6cca9fac', 'dm-uuid-LVM-GHCgDfhjqHbxrN6X57Au2JxG0UkZVV6SYAZhc8KzmZuq1WeEWDc3uD3fnm7izynW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.219962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.219978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.219985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.219999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5-osd--block--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5', 'dm-uuid-LVM-hgtH6tpzhnx2QQztd0bAxtrFNuWF2rUJ5NeecY0iboAd4WuXz2J4zhiyU5ciBGer'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b3e8737--91e3--53c0--9b3a--5288a4111b63-osd--block--2b3e8737--91e3--53c0--9b3a--5288a4111b63', 'dm-uuid-LVM-xy47BTMmezzKuhVgeOBsrflxsh2nMMZxq1yfesNVZn38knC8hXtIcPF2l4aSiZtk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part1', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part14', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part15', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part16', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b9be54a9--cd9c--568c--9220--61b18da052d9-osd--block--b9be54a9--cd9c--568c--9220--61b18da052d9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sZILOh-5tjd-Njbz-niJz-MLcH-ddwd-N90s5N', 'scsi-0QEMU_QEMU_HARDDISK_49cd33e4-72cd-4f3f-940d-55c9f0f00a98', 'scsi-SQEMU_QEMU_HARDDISK_49cd33e4-72cd-4f3f-940d-55c9f0f00a98'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--03961d85--1922--5669--8251--0ccc6cca9fac-osd--block--03961d85--1922--5669--8251--0ccc6cca9fac'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uzsK8R-6Gxn-oDpB-2Hms-tH0u-G7ac-aGEaLg', 'scsi-0QEMU_QEMU_HARDDISK_1f00cc32-4927-4d99-9c1e-b649b1d1f573', 'scsi-SQEMU_QEMU_HARDDISK_1f00cc32-4927-4d99-9c1e-b649b1d1f573'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e91d200a--cf56--55df--b2f8--08f15361112f-osd--block--e91d200a--cf56--55df--b2f8--08f15361112f', 'dm-uuid-LVM-xweh1YC5RDiVWhdx1PKskF5JCr6mh2cIruH8cXC0TzdCdfDxhyAoa4ykUz6BhD3x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a292857-8cd9-4a14-95ba-a5d022f4a90e', 'scsi-SQEMU_QEMU_HARDDISK_0a292857-8cd9-4a14-95ba-a5d022f4a90e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ebda4f6--7b50--59b0--8273--b291dd7d1677-osd--block--7ebda4f6--7b50--59b0--8273--b291dd7d1677', 'dm-uuid-LVM-qXJ0ZdEvWcXk2vDlKmzolqGpgokmwsYUBrLx3bsgLllWFSDGo0grKruMjv28g8BC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-13 00:55:09.220289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7', 'scsi-SQEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part1', 'scsi-SQEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part14', 'scsi-SQEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part15', 'scsi-SQEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part16', 'scsi-SQEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220477 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220517 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part1', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part14', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part15', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part16', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220571 | 
orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.220576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part1', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part14', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part15', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part16', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5-osd--block--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6DKvSU-Cdbw-CbUk-lrwG-gfma-BvTf-I6WE2Y', 'scsi-0QEMU_QEMU_HARDDISK_6ad71b9e-76db-4ac5-b372-050f59253056', 'scsi-SQEMU_QEMU_HARDDISK_6ad71b9e-76db-4ac5-b372-050f59253056'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e91d200a--cf56--55df--b2f8--08f15361112f-osd--block--e91d200a--cf56--55df--b2f8--08f15361112f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FH2r3c-Cf2J-ryeq-ItYe-hsKy-vARI-3t2Zip', 'scsi-0QEMU_QEMU_HARDDISK_79922d84-0445-4535-976b-32e74e35a748', 'scsi-SQEMU_QEMU_HARDDISK_79922d84-0445-4535-976b-32e74e35a748'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2b3e8737--91e3--53c0--9b3a--5288a4111b63-osd--block--2b3e8737--91e3--53c0--9b3a--5288a4111b63'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-U2UOZW-SjBW-buGp-55CV-6Kqk-QEzG-AXKRcv', 'scsi-0QEMU_QEMU_HARDDISK_9db8234e-f6a8-4211-a809-87a509109e78', 'scsi-SQEMU_QEMU_HARDDISK_9db8234e-f6a8-4211-a809-87a509109e78'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7ebda4f6--7b50--59b0--8273--b291dd7d1677-osd--block--7ebda4f6--7b50--59b0--8273--b291dd7d1677'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xHcG0E-vZHx-JCHk-rp13-0i6I-R8mG-hkrVOO', 'scsi-0QEMU_QEMU_HARDDISK_f69e02e7-d854-4ded-bb8d-51d0e0400336', 'scsi-SQEMU_QEMU_HARDDISK_f69e02e7-d854-4ded-bb8d-51d0e0400336'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0bff01-3898-4d25-903e-2ecdf087243c', 'scsi-SQEMU_QEMU_HARDDISK_5c0bff01-3898-4d25-903e-2ecdf087243c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5295d09e-fddd-4452-8a25-9ba23e2b95ae', 'scsi-SQEMU_QEMU_HARDDISK_5295d09e-fddd-4452-8a25-9ba23e2b95ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-13 00:55:09.220700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8', 'scsi-SQEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part1', 'scsi-SQEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part14', 'scsi-SQEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part15', 'scsi-SQEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part16', 'scsi-SQEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220779 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.220783 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.220787 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.220791 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.220795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:55:09.220890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11', 'scsi-SQEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part1', 'scsi-SQEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part14', 'scsi-SQEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part15', 'scsi-SQEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part16', 'scsi-SQEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220947 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:55:09.220958 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.220965 | orchestrator | 2026-01-13 00:55:09.220971 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-13 00:55:09.220978 | orchestrator | Tuesday 13 January 2026 00:44:20 +0000 (0:00:01.230) 0:00:32.421 ******* 2026-01-13 00:55:09.220985 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b9be54a9--cd9c--568c--9220--61b18da052d9-osd--block--b9be54a9--cd9c--568c--9220--61b18da052d9', 'dm-uuid-LVM-tI9LueIqoznnHWvc67dyxcKb2DRlZadxhD8MTBDVbVSuVr75iGA0ykjhJTLhvbvd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.220993 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--03961d85--1922--5669--8251--0ccc6cca9fac-osd--block--03961d85--1922--5669--8251--0ccc6cca9fac', 'dm-uuid-LVM-GHCgDfhjqHbxrN6X57Au2JxG0UkZVV6SYAZhc8KzmZuq1WeEWDc3uD3fnm7izynW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.220998 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5-osd--block--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5', 'dm-uuid-LVM-hgtH6tpzhnx2QQztd0bAxtrFNuWF2rUJ5NeecY0iboAd4WuXz2J4zhiyU5ciBGer'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221016 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b3e8737--91e3--53c0--9b3a--5288a4111b63-osd--block--2b3e8737--91e3--53c0--9b3a--5288a4111b63', 'dm-uuid-LVM-xy47BTMmezzKuhVgeOBsrflxsh2nMMZxq1yfesNVZn38knC8hXtIcPF2l4aSiZtk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221050 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221056 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221060 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221065 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221069 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221078 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-13 00:55:09.221084 | orchestrator | [repeated per-device skip output condensed; each skipped loop item carried the full ansible_facts device dict]
2026-01-13 00:55:09.221118 | orchestrator | skipping: [testbed-node-3] => all devices (loop0..loop7; sda: 80.00 GB QEMU HARDDISK root disk; sdb, sdc: 20.00 GB QEMU HARDDISK, held by ceph OSD block LVs; sdd: 20.00 GB QEMU HARDDISK, unused; sr0: QEMU DVD-ROM, label 'config-2'); skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool'
2026-01-13 00:55:09.221125 | orchestrator | skipping: [testbed-node-4] => all devices (same device set as testbed-node-3); same false_condition
2026-01-13 00:55:09.221290 | orchestrator | skipping: [testbed-node-5] => all devices (dm-0, dm-1: 20.00 GB ceph osd-block LVs; loop0..loop7; sda: 80.00 GB QEMU HARDDISK root disk; sdb, sdc: 20.00 GB, held by ceph OSD block LVs; sdd: 20.00 GB, unused; sr0: QEMU DVD-ROM); same false_condition
2026-01-13 00:55:09.221463 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.221522 | orchestrator | skipping: [testbed-node-0] => loop0..loop7; skip_reason: 'Conditional result was False', false_condition: 'inventory_hostname in groups.get(osd_group_name, [])'
2026-01-13 00:55:09.221559 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.221635 | orchestrator | skipping: [testbed-node-1] => loop0..loop2; same false_condition as testbed-node-0
2026-01-13 00:55:09.221663 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False,
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221693 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7', 'scsi-SQEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part1', 'scsi-SQEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part14', 'scsi-SQEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part15', 
'scsi-SQEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part16', 'scsi-SQEMU_QEMU_HARDDISK_4dcaa69c-5414-4861-9f75-cc0da42200e7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221703 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221707 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 
'masters': [], 'uuids': ['2026-01-13-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221713 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221742 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221747 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221782 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8', 'scsi-SQEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part1', 'scsi-SQEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part14', 'scsi-SQEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part15', 'scsi-SQEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part16', 'scsi-SQEMU_QEMU_HARDDISK_83689365-4423-433a-82c0-63cbcaedfdf8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221797 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221830 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.221836 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.221840 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.221844 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221851 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221855 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221859 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221863 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221871 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221899 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221907 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:55:09.221911 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11', 'scsi-SQEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part1', 'scsi-SQEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part14', 'scsi-SQEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part15', 'scsi-SQEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part16', 'scsi-SQEMU_QEMU_HARDDISK_d36bd727-f6fd-4e09-af6c-5d1752a9fb11-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-13 00:55:09.221917 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-13 00:55:09.221922 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.221925 | orchestrator |
2026-01-13 00:55:09.221953 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-13 00:55:09.221962 | orchestrator | Tuesday 13 January 2026 00:44:21 +0000 (0:00:01.434) 0:00:33.855 *******
2026-01-13 00:55:09.221972 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.221984 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.221988 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.221992 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.221995 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.221999 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.222003 | orchestrator |
2026-01-13 00:55:09.222007 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-13 00:55:09.222011 | orchestrator | Tuesday 13 January 2026 00:44:23 +0000 (0:00:01.356) 0:00:35.212 *******
2026-01-13 00:55:09.222038 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.222045 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.222051 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.222057 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.222063 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.222069 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.222075 | orchestrator |
2026-01-13 00:55:09.222081 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-13 00:55:09.222087 | orchestrator | Tuesday 13 January 2026 00:44:23 +0000 (0:00:00.548) 0:00:35.761 *******
2026-01-13 00:55:09.222093 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.222099 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.222106 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.222110 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.222113 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.222117 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.222121 | orchestrator |
2026-01-13 00:55:09.222126 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-13 00:55:09.222133 | orchestrator | Tuesday 13 January 2026 00:44:24 +0000 (0:00:01.247) 0:00:37.009 *******
2026-01-13 00:55:09.222139 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.222145 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.222149 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.222153 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.222163 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.222167 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.222171 | orchestrator |
2026-01-13 00:55:09.222174 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-13 00:55:09.222178 | orchestrator | Tuesday 13 January 2026 00:44:25 +0000 (0:00:00.834) 0:00:37.843 *******
2026-01-13 00:55:09.222182 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.222185 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.222189 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.222193 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.222197 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.222200 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.222204 | orchestrator |
2026-01-13 00:55:09.222208 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-13 00:55:09.222212 | orchestrator | Tuesday 13 January 2026 00:44:26 +0000 (0:00:00.916) 0:00:38.760 *******
2026-01-13 00:55:09.222215 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.222219 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.222223 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.222226 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.222230 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.222234 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.222237 | orchestrator |
2026-01-13 00:55:09.222243 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-13 00:55:09.222250 | orchestrator | Tuesday 13 January 2026 00:44:27 +0000 (0:00:00.852) 0:00:39.613 *******
2026-01-13 00:55:09.222257 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-13 00:55:09.222261 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-13 00:55:09.222282 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-13 00:55:09.222292 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-13 00:55:09.222295 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-13 00:55:09.222299 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-13 00:55:09.222303 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-13 00:55:09.222306 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-13 00:55:09.222310 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-13 00:55:09.222314 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-13 00:55:09.222317 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-13 00:55:09.222321 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-13 00:55:09.222325 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-13 00:55:09.222328 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-13 00:55:09.222332 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-13 00:55:09.222336 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-13 00:55:09.222342 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-13 00:55:09.222345 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-13 00:55:09.222349 | orchestrator |
2026-01-13 00:55:09.222353 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-13 00:55:09.222356 | orchestrator | Tuesday 13 January 2026 00:44:32 +0000 (0:00:05.016) 0:00:44.630 *******
2026-01-13 00:55:09.222360 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-13 00:55:09.222364 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-13 00:55:09.222368 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-13 00:55:09.222371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-13 00:55:09.222375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-13 00:55:09.222379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-13 00:55:09.222382 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.222386 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.222390 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-13 00:55:09.222413 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-13 00:55:09.222418 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-13 00:55:09.222421 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-13 00:55:09.222425 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-13 00:55:09.222429 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.222432 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-13 00:55:09.222436 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-13 00:55:09.222440 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-13 00:55:09.222443 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-13 00:55:09.222447 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.222451 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.222454 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-13 00:55:09.222458 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-13 00:55:09.222462 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-13 00:55:09.222465 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.222469 | orchestrator |
2026-01-13 00:55:09.222473 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-13 00:55:09.222477 | orchestrator | Tuesday 13 January 2026 00:44:33 +0000 (0:00:01.108) 0:00:45.738 *******
2026-01-13 00:55:09.222480 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.222484 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.222488 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.222492 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 00:55:09.222498 | orchestrator |
2026-01-13 00:55:09.222502 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-13 00:55:09.222506 | orchestrator | Tuesday 13 January 2026 00:44:34 +0000 (0:00:01.119) 0:00:46.858 *******
2026-01-13 00:55:09.222510 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.222513 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.222517 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.222521 | orchestrator |
2026-01-13 00:55:09.222524 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-13 00:55:09.222528 | orchestrator | Tuesday 13 January 2026 00:44:35 +0000 (0:00:00.611) 0:00:47.469 *******
2026-01-13 00:55:09.222532 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.222536 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.222539 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.222543 | orchestrator |
2026-01-13 00:55:09.222547 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-13 00:55:09.222551 | orchestrator | Tuesday 13 January 2026 00:44:36 +0000 (0:00:00.647) 0:00:48.117 *******
2026-01-13 00:55:09.222554 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.222558 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.222562 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.222565 | orchestrator |
2026-01-13 00:55:09.222569 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-13 00:55:09.222573 | orchestrator | Tuesday 13 January 2026 00:44:36 +0000 (0:00:00.596) 0:00:48.713 *******
2026-01-13 00:55:09.222577 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.222580 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.222584 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.222588 | orchestrator |
2026-01-13 00:55:09.222592 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-13 00:55:09.222595 | orchestrator | Tuesday 13 January 2026 00:44:37 +0000 (0:00:00.847) 0:00:49.561 *******
2026-01-13 00:55:09.222600 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-13 00:55:09.222605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-13 00:55:09.222609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-13 00:55:09.222613 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.222617 | orchestrator |
2026-01-13 00:55:09.222622 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-13 00:55:09.222626 | orchestrator | Tuesday 13 January 2026 00:44:37 +0000 (0:00:00.328) 0:00:49.889 *******
2026-01-13 00:55:09.222630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-13 00:55:09.222634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-13 00:55:09.222639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-13 00:55:09.222643 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.222647 | orchestrator |
2026-01-13 00:55:09.222652 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-13 00:55:09.222658 | orchestrator | Tuesday 13 January 2026 00:44:38 +0000 (0:00:00.349) 0:00:50.239 *******
2026-01-13 00:55:09.222662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-13 00:55:09.222666 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-13 00:55:09.222671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-13 00:55:09.222675 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.222680 | orchestrator |
2026-01-13 00:55:09.222684 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-13 00:55:09.222689 | orchestrator | Tuesday 13 January 2026 00:44:38 +0000 (0:00:00.477) 0:00:50.716 *******
2026-01-13 00:55:09.222693 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.222697 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.222704 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.222708 | orchestrator |
2026-01-13 00:55:09.222713 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-13 00:55:09.222717 | orchestrator | Tuesday 13 January 2026 00:44:39 +0000 (0:00:00.382) 0:00:51.099 *******
2026-01-13 00:55:09.222721 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-13 00:55:09.222725 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-13 00:55:09.222741 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-13 00:55:09.222746 | orchestrator |
2026-01-13 00:55:09.222780 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-13 00:55:09.222785 | orchestrator | Tuesday 13 January 2026 00:44:39 +0000 (0:00:00.883) 0:00:51.982 *******
2026-01-13 00:55:09.222789 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-13 00:55:09.222794 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-13 00:55:09.222798 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-13 00:55:09.222803 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-13 00:55:09.222807 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-13 00:55:09.222811 |
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-13 00:55:09.222815 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-13 00:55:09.222819 | orchestrator | 2026-01-13 00:55:09.222824 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-13 00:55:09.222828 | orchestrator | Tuesday 13 January 2026 00:44:40 +0000 (0:00:00.984) 0:00:52.967 ******* 2026-01-13 00:55:09.222832 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-13 00:55:09.222837 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-13 00:55:09.222841 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-13 00:55:09.222845 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-13 00:55:09.222850 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-13 00:55:09.222854 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-13 00:55:09.222858 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-13 00:55:09.222862 | orchestrator | 2026-01-13 00:55:09.222866 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-13 00:55:09.222871 | orchestrator | Tuesday 13 January 2026 00:44:43 +0000 (0:00:02.195) 0:00:55.162 ******* 2026-01-13 00:55:09.222875 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.222881 | orchestrator | 2026-01-13 00:55:09.222885 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-01-13 00:55:09.222889 | orchestrator | Tuesday 13 January 2026 00:44:44 +0000 (0:00:01.695) 0:00:56.858 ******* 2026-01-13 00:55:09.222893 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.222898 | orchestrator | 2026-01-13 00:55:09.222902 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-13 00:55:09.222906 | orchestrator | Tuesday 13 January 2026 00:44:46 +0000 (0:00:01.648) 0:00:58.506 ******* 2026-01-13 00:55:09.222910 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.222914 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.222918 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.222923 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.222930 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.222935 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.222939 | orchestrator | 2026-01-13 00:55:09.222944 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-13 00:55:09.222948 | orchestrator | Tuesday 13 January 2026 00:44:48 +0000 (0:00:01.632) 0:01:00.139 ******* 2026-01-13 00:55:09.222952 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.222957 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.222961 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.222965 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.222969 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.222973 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.222977 | orchestrator | 2026-01-13 00:55:09.222981 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-13 00:55:09.222984 | orchestrator | Tuesday 13 January 2026 00:44:49 +0000 
(0:00:01.005) 0:01:01.144 ******* 2026-01-13 00:55:09.222988 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.222992 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.222996 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.222999 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.223005 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.223009 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.223013 | orchestrator | 2026-01-13 00:55:09.223017 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-13 00:55:09.223020 | orchestrator | Tuesday 13 January 2026 00:44:50 +0000 (0:00:01.056) 0:01:02.200 ******* 2026-01-13 00:55:09.223024 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.223028 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.223032 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.223035 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.223039 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.223043 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.223047 | orchestrator | 2026-01-13 00:55:09.223050 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-13 00:55:09.223054 | orchestrator | Tuesday 13 January 2026 00:44:50 +0000 (0:00:00.852) 0:01:03.053 ******* 2026-01-13 00:55:09.223058 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.223062 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.223065 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.223069 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.223073 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.223090 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.223095 | orchestrator | 2026-01-13 00:55:09.223099 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-01-13 00:55:09.223103 | orchestrator | Tuesday 13 January 2026 00:44:52 +0000 (0:00:01.470) 0:01:04.523 ******* 2026-01-13 00:55:09.223106 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.223110 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.223114 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.223118 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.223121 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.223125 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.223129 | orchestrator | 2026-01-13 00:55:09.223133 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-13 00:55:09.223136 | orchestrator | Tuesday 13 January 2026 00:44:52 +0000 (0:00:00.544) 0:01:05.067 ******* 2026-01-13 00:55:09.223140 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.223144 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.223148 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.223151 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.223155 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.223159 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.223162 | orchestrator | 2026-01-13 00:55:09.223166 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-13 00:55:09.223172 | orchestrator | Tuesday 13 January 2026 00:44:53 +0000 (0:00:00.802) 0:01:05.870 ******* 2026-01-13 00:55:09.223176 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.223180 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.223184 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.223187 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.223191 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.223195 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.223198 | orchestrator | 2026-01-13 
00:55:09.223202 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-13 00:55:09.223206 | orchestrator | Tuesday 13 January 2026 00:44:54 +0000 (0:00:01.099) 0:01:06.970 ******* 2026-01-13 00:55:09.223210 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.223214 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.223217 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.223221 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.223225 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.223228 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.223232 | orchestrator | 2026-01-13 00:55:09.223236 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-13 00:55:09.223239 | orchestrator | Tuesday 13 January 2026 00:44:56 +0000 (0:00:01.579) 0:01:08.549 ******* 2026-01-13 00:55:09.223243 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.223247 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.223251 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.223254 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.223258 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.223262 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.223265 | orchestrator | 2026-01-13 00:55:09.223269 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-13 00:55:09.223273 | orchestrator | Tuesday 13 January 2026 00:44:57 +0000 (0:00:00.635) 0:01:09.185 ******* 2026-01-13 00:55:09.223277 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.223280 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.223284 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.223288 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.223291 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.223295 | 
orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.223299 | orchestrator | 2026-01-13 00:55:09.223303 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-13 00:55:09.223306 | orchestrator | Tuesday 13 January 2026 00:44:58 +0000 (0:00:01.001) 0:01:10.186 ******* 2026-01-13 00:55:09.223310 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.223314 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.223318 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.223321 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.223325 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.223329 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.223333 | orchestrator | 2026-01-13 00:55:09.223336 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-13 00:55:09.223340 | orchestrator | Tuesday 13 January 2026 00:44:58 +0000 (0:00:00.630) 0:01:10.817 ******* 2026-01-13 00:55:09.223344 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.223347 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.223351 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.223355 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.223359 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.223362 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.223366 | orchestrator | 2026-01-13 00:55:09.223370 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-13 00:55:09.223373 | orchestrator | Tuesday 13 January 2026 00:44:59 +0000 (0:00:00.921) 0:01:11.739 ******* 2026-01-13 00:55:09.223377 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.223381 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.223385 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.223402 | orchestrator | skipping: [testbed-node-0] 2026-01-13 
00:55:09.223406 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.223410 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.223414 | orchestrator | 2026-01-13 00:55:09.223418 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-13 00:55:09.223421 | orchestrator | Tuesday 13 January 2026 00:45:00 +0000 (0:00:00.679) 0:01:12.418 ******* 2026-01-13 00:55:09.223425 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.223429 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.223433 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.223436 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.223440 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.223444 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.223448 | orchestrator | 2026-01-13 00:55:09.223451 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-13 00:55:09.223455 | orchestrator | Tuesday 13 January 2026 00:45:01 +0000 (0:00:01.113) 0:01:13.532 ******* 2026-01-13 00:55:09.223459 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.223463 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.223466 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.223470 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.223486 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.223491 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.223494 | orchestrator | 2026-01-13 00:55:09.223498 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-13 00:55:09.223502 | orchestrator | Tuesday 13 January 2026 00:45:02 +0000 (0:00:00.854) 0:01:14.386 ******* 2026-01-13 00:55:09.223506 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.223510 | orchestrator | skipping: [testbed-node-4] 2026-01-13 
00:55:09.223513 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.223517 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.223521 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.223524 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.223528 | orchestrator | 2026-01-13 00:55:09.223532 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-13 00:55:09.223536 | orchestrator | Tuesday 13 January 2026 00:45:03 +0000 (0:00:01.465) 0:01:15.851 ******* 2026-01-13 00:55:09.223540 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.223543 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.223547 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.223551 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.223554 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.223558 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.223562 | orchestrator | 2026-01-13 00:55:09.223566 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-13 00:55:09.223569 | orchestrator | Tuesday 13 January 2026 00:45:04 +0000 (0:00:00.649) 0:01:16.500 ******* 2026-01-13 00:55:09.223573 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.223577 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.223580 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.223584 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.223588 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.223591 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.223595 | orchestrator | 2026-01-13 00:55:09.223599 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-13 00:55:09.223603 | orchestrator | Tuesday 13 January 2026 00:45:05 +0000 (0:00:01.315) 0:01:17.816 ******* 2026-01-13 00:55:09.223606 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.223610 | 
orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.223614 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.223618 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.223621 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.223628 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.223635 | orchestrator | 2026-01-13 00:55:09.223644 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-13 00:55:09.223655 | orchestrator | Tuesday 13 January 2026 00:45:07 +0000 (0:00:01.946) 0:01:19.762 ******* 2026-01-13 00:55:09.223661 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.223667 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.223673 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.223679 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.223684 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.223690 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.223697 | orchestrator | 2026-01-13 00:55:09.223704 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-13 00:55:09.223710 | orchestrator | Tuesday 13 January 2026 00:45:10 +0000 (0:00:03.006) 0:01:22.768 ******* 2026-01-13 00:55:09.223716 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.223722 | orchestrator | 2026-01-13 00:55:09.223728 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-13 00:55:09.223734 | orchestrator | Tuesday 13 January 2026 00:45:11 +0000 (0:00:01.013) 0:01:23.781 ******* 2026-01-13 00:55:09.223740 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.223746 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.223766 | orchestrator | 
skipping: [testbed-node-5] 2026-01-13 00:55:09.223772 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.223778 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.223784 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.223789 | orchestrator | 2026-01-13 00:55:09.223795 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-13 00:55:09.223801 | orchestrator | Tuesday 13 January 2026 00:45:12 +0000 (0:00:00.648) 0:01:24.430 ******* 2026-01-13 00:55:09.223807 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.223813 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.223820 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.223826 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.223832 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.223838 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.223844 | orchestrator | 2026-01-13 00:55:09.223850 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-13 00:55:09.223855 | orchestrator | Tuesday 13 January 2026 00:45:13 +0000 (0:00:00.779) 0:01:25.210 ******* 2026-01-13 00:55:09.223862 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-13 00:55:09.223874 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-13 00:55:09.223881 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-13 00:55:09.223887 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-13 00:55:09.223894 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-13 00:55:09.223900 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-13 00:55:09.223905 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-13 00:55:09.223911 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-13 00:55:09.223916 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-13 00:55:09.223921 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-13 00:55:09.223951 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-13 00:55:09.223957 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-13 00:55:09.223962 | orchestrator | 2026-01-13 00:55:09.223968 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-13 00:55:09.223978 | orchestrator | Tuesday 13 January 2026 00:45:14 +0000 (0:00:01.510) 0:01:26.720 ******* 2026-01-13 00:55:09.223983 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.223989 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.223994 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.223999 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.224005 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.224011 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.224017 | orchestrator | 2026-01-13 00:55:09.224023 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-13 00:55:09.224029 | orchestrator | Tuesday 13 January 2026 00:45:16 +0000 (0:00:02.169) 0:01:28.889 ******* 2026-01-13 00:55:09.224035 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.224041 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.224047 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.224053 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.224059 | 
orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.224065 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.224072 | orchestrator | 2026-01-13 00:55:09.224078 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-13 00:55:09.224084 | orchestrator | Tuesday 13 January 2026 00:45:17 +0000 (0:00:01.037) 0:01:29.927 ******* 2026-01-13 00:55:09.224091 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.224097 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.224104 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.224110 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.224116 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.224123 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.224129 | orchestrator | 2026-01-13 00:55:09.224136 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-13 00:55:09.224143 | orchestrator | Tuesday 13 January 2026 00:45:18 +0000 (0:00:00.968) 0:01:30.895 ******* 2026-01-13 00:55:09.224149 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.224155 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.224161 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.224167 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.224173 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.224179 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.224185 | orchestrator | 2026-01-13 00:55:09.224191 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-13 00:55:09.224197 | orchestrator | Tuesday 13 January 2026 00:45:19 +0000 (0:00:00.561) 0:01:31.457 ******* 2026-01-13 00:55:09.224204 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.224211 | orchestrator | 2026-01-13 00:55:09.224218 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-13 00:55:09.224224 | orchestrator | Tuesday 13 January 2026 00:45:20 +0000 (0:00:01.058) 0:01:32.516 ******* 2026-01-13 00:55:09.224231 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.224238 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.224244 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.224250 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.224256 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.224262 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.224268 | orchestrator | 2026-01-13 00:55:09.224275 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-13 00:55:09.224282 | orchestrator | Tuesday 13 January 2026 00:46:21 +0000 (0:01:00.887) 0:02:33.404 ******* 2026-01-13 00:55:09.224288 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-13 00:55:09.224294 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-13 00:55:09.224301 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-13 00:55:09.224313 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.224320 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-13 00:55:09.224326 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-13 00:55:09.224332 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-13 00:55:09.224337 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.224343 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-13 
00:55:09.224353 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-13 00:55:09.224359 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-13 00:55:09.224365 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.224371 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-13 00:55:09.224377 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-13 00:55:09.224383 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-13 00:55:09.224388 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.224394 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-13 00:55:09.224400 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-13 00:55:09.224406 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-13 00:55:09.224412 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.224443 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-13 00:55:09.224451 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-13 00:55:09.224457 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-13 00:55:09.224463 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.224470 | orchestrator |
2026-01-13 00:55:09.224476 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-01-13 00:55:09.224482 | orchestrator | Tuesday 13 January 2026 00:46:22 +0000 (0:00:00.950) 0:02:34.354 *******
2026-01-13 00:55:09.224488 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.224495 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.224501 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.224508 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.224515 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.224521 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.224528 | orchestrator |
2026-01-13 00:55:09.224535 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-01-13 00:55:09.224542 | orchestrator | Tuesday 13 January 2026 00:46:23 +0000 (0:00:00.968) 0:02:35.323 *******
2026-01-13 00:55:09.224548 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.224555 | orchestrator |
2026-01-13 00:55:09.224562 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-01-13 00:55:09.224568 | orchestrator | Tuesday 13 January 2026 00:46:23 +0000 (0:00:00.147) 0:02:35.471 *******
2026-01-13 00:55:09.224574 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.224580 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.224587 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.224593 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.224599 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.224605 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.224612 | orchestrator |
2026-01-13 00:55:09.224617 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-01-13 00:55:09.224623 | orchestrator | Tuesday 13 January 2026 00:46:23 +0000 (0:00:00.606) 0:02:36.077 *******
2026-01-13 00:55:09.224630 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.224636 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.224647 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.224653 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.224659 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.224665 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.224672 | orchestrator |
2026-01-13 00:55:09.224678 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-01-13 00:55:09.224684 | orchestrator | Tuesday 13 January 2026 00:46:24 +0000 (0:00:00.786) 0:02:36.864 *******
2026-01-13 00:55:09.224690 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.224696 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.224702 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.224709 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.224715 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.224721 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.224727 | orchestrator |
2026-01-13 00:55:09.224734 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-13 00:55:09.224740 | orchestrator | Tuesday 13 January 2026 00:46:25 +0000 (0:00:00.590) 0:02:37.455 *******
2026-01-13 00:55:09.224747 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.224785 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.224792 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.224798 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.224804 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.224810 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.224816 | orchestrator |
2026-01-13 00:55:09.224822 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-13 00:55:09.224829 | orchestrator | Tuesday 13 January 2026 00:46:27 +0000 (0:00:02.165) 0:02:39.620 *******
2026-01-13 00:55:09.224835 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.224841 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.224847 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.224852 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.224859 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.224865 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.224871 | orchestrator |
2026-01-13 00:55:09.224878 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-13 00:55:09.224884 | orchestrator | Tuesday 13 January 2026 00:46:27 +0000 (0:00:00.472) 0:02:40.093 *******
2026-01-13 00:55:09.224891 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:55:09.224898 | orchestrator |
2026-01-13 00:55:09.224905 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-01-13 00:55:09.224911 | orchestrator | Tuesday 13 January 2026 00:46:28 +0000 (0:00:00.979) 0:02:41.072 *******
2026-01-13 00:55:09.224918 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.224924 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.224934 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.224940 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.224947 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.224953 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.224959 | orchestrator |
2026-01-13 00:55:09.224965 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-01-13 00:55:09.224972 | orchestrator | Tuesday 13 January 2026 00:46:29 +0000 (0:00:00.754) 0:02:41.827 *******
2026-01-13 00:55:09.224979 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.224985 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.224991 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.224998 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.225004 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.225011 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.225017 | orchestrator |
2026-01-13 00:55:09.225024 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-13 00:55:09.225036 | orchestrator | Tuesday 13 January 2026 00:46:30 +0000 (0:00:00.597) 0:02:42.424 *******
2026-01-13 00:55:09.225043 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.225050 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.225086 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.225094 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.225100 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.225105 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.225111 | orchestrator |
2026-01-13 00:55:09.225117 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-13 00:55:09.225124 | orchestrator | Tuesday 13 January 2026 00:46:30 +0000 (0:00:00.645) 0:02:43.070 *******
2026-01-13 00:55:09.225130 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.225136 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.225143 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.225149 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.225155 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.225159 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.225163 | orchestrator |
2026-01-13 00:55:09.225167 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-13 00:55:09.225170 | orchestrator | Tuesday 13 January 2026 00:46:31 +0000 (0:00:00.645) 0:02:43.715 *******
2026-01-13 00:55:09.225174 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.225178 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.225181 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.225185 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.225189 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.225192 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.225196 | orchestrator |
2026-01-13 00:55:09.225200 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-13 00:55:09.225204 | orchestrator | Tuesday 13 January 2026 00:46:32 +0000 (0:00:00.679) 0:02:44.395 *******
2026-01-13 00:55:09.225207 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.225211 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.225214 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.225218 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.225222 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.225225 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.225229 | orchestrator |
2026-01-13 00:55:09.225233 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-13 00:55:09.225236 | orchestrator | Tuesday 13 January 2026 00:46:32 +0000 (0:00:00.652) 0:02:45.047 *******
2026-01-13 00:55:09.225241 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.225248 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.225254 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.225260 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.225266 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.225272 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.225278 | orchestrator |
2026-01-13 00:55:09.225284 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-13 00:55:09.225291 | orchestrator | Tuesday 13 January 2026 00:46:33 +0000 (0:00:00.710) 0:02:45.757 *******
2026-01-13 00:55:09.225341 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.225346 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.225349 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.225353 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.225357 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.225361 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.225364 | orchestrator |
2026-01-13 00:55:09.225368 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-13 00:55:09.225372 | orchestrator | Tuesday 13 January 2026 00:46:34 +0000 (0:00:00.558) 0:02:46.316 *******
2026-01-13 00:55:09.225375 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.225380 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.225389 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.225392 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.225396 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.225400 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.225403 | orchestrator |
2026-01-13 00:55:09.225407 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-13 00:55:09.225411 | orchestrator | Tuesday 13 January 2026 00:46:35 +0000 (0:00:01.010) 0:02:47.326 *******
2026-01-13 00:55:09.225415 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:55:09.225419 | orchestrator |
2026-01-13 00:55:09.225423 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-13 00:55:09.225427 | orchestrator | Tuesday 13 January 2026 00:46:36 +0000 (0:00:01.087) 0:02:48.414 *******
2026-01-13 00:55:09.225431 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-13 00:55:09.225435 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-13 00:55:09.225438 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-13 00:55:09.225442 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-13 00:55:09.225446 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-13 00:55:09.225450 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-01-13 00:55:09.225456 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-13 00:55:09.225460 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-13 00:55:09.225464 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-13 00:55:09.225467 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-13 00:55:09.225471 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-13 00:55:09.225475 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-13 00:55:09.225479 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-01-13 00:55:09.225482 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-13 00:55:09.225486 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-13 00:55:09.225490 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-13 00:55:09.225493 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-13 00:55:09.225497 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-13 00:55:09.225521 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-13 00:55:09.225525 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-13 00:55:09.225529 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-13 00:55:09.225533 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-13 00:55:09.225536 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-13 00:55:09.225540 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-13 00:55:09.225544 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-01-13 00:55:09.225547 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-13 00:55:09.225551 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-13 00:55:09.225555 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-13 00:55:09.225559 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-13 00:55:09.225562 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-13 00:55:09.225566 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-01-13 00:55:09.225570 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-13 00:55:09.225573 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-13 00:55:09.225577 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-13 00:55:09.225584 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-13 00:55:09.225588 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-13 00:55:09.225592 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-01-13 00:55:09.225596 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-13 00:55:09.225599 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-13 00:55:09.225603 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-13 00:55:09.225607 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-13 00:55:09.225610 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-13 00:55:09.225614 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-01-13 00:55:09.225618 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-13 00:55:09.225621 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-13 00:55:09.225625 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-13 00:55:09.225629 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-13 00:55:09.225632 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-13 00:55:09.225636 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-13 00:55:09.225640 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-01-13 00:55:09.225643 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-13 00:55:09.225647 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-13 00:55:09.225651 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-13 00:55:09.225654 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-13 00:55:09.225658 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-13 00:55:09.225662 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-13 00:55:09.225666 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-13 00:55:09.225669 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-13 00:55:09.225673 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-13 00:55:09.225677 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-13 00:55:09.225680 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-13 00:55:09.225684 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-13 00:55:09.225688 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-13 00:55:09.225691 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-13 00:55:09.225695 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-13 00:55:09.225699 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-13 00:55:09.225704 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-13 00:55:09.225708 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-13 00:55:09.225712 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-13 00:55:09.225716 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-13 00:55:09.225719 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-13 00:55:09.225723 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-13 00:55:09.225727 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-13 00:55:09.225730 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-13 00:55:09.225737 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-13 00:55:09.225741 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-13 00:55:09.225770 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-13 00:55:09.225778 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-13 00:55:09.225785 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-13 00:55:09.225789 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-13 00:55:09.225793 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-13 00:55:09.225796 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-01-13 00:55:09.225800 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-13 00:55:09.225804 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-01-13 00:55:09.225808 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-13 00:55:09.225811 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-01-13 00:55:09.225815 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-01-13 00:55:09.225819 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-01-13 00:55:09.225823 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-01-13 00:55:09.225826 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-01-13 00:55:09.225830 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-01-13 00:55:09.225834 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-13 00:55:09.225838 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-01-13 00:55:09.225841 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-01-13 00:55:09.225845 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-01-13 00:55:09.225849 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-01-13 00:55:09.225853 | orchestrator |
2026-01-13 00:55:09.225857 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-13 00:55:09.225860 | orchestrator | Tuesday 13 January 2026 00:46:43 +0000 (0:00:07.433) 0:02:55.847 *******
2026-01-13 00:55:09.225864 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.225868 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.225872 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.225876 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 00:55:09.225880 | orchestrator |
2026-01-13 00:55:09.225883 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-13 00:55:09.225887 | orchestrator | Tuesday 13 January 2026 00:46:44 +0000 (0:00:00.840) 0:02:56.687 *******
2026-01-13 00:55:09.225891 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-13 00:55:09.225895 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-13 00:55:09.225899 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-13 00:55:09.225903 | orchestrator |
2026-01-13 00:55:09.225906 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-13 00:55:09.225910 | orchestrator | Tuesday 13 January 2026 00:46:45 +0000 (0:00:00.844) 0:02:57.532 *******
2026-01-13 00:55:09.225914 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-13 00:55:09.225918 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-13 00:55:09.225924 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-13 00:55:09.225928 | orchestrator |
2026-01-13 00:55:09.225932 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-13 00:55:09.225936 | orchestrator | Tuesday 13 January 2026 00:46:46 +0000 (0:00:01.323) 0:02:58.855 *******
2026-01-13 00:55:09.225939 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.225943 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.225947 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.225951 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.225954 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.225958 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.225962 | orchestrator |
2026-01-13 00:55:09.225968 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-13 00:55:09.225972 | orchestrator | Tuesday 13 January 2026 00:46:47 +0000 (0:00:00.569) 0:02:59.425 *******
2026-01-13 00:55:09.225976 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.225979 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.225983 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.225987 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.225990 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.225994 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.225998 | orchestrator |
2026-01-13 00:55:09.226002 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-13 00:55:09.226005 | orchestrator | Tuesday 13 January 2026 00:46:48 +0000 (0:00:00.745) 0:03:00.170 *******
2026-01-13 00:55:09.226188 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.226193 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.226197 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.226200 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.226204 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.226208 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.226212 | orchestrator |
2026-01-13 00:55:09.226233 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-13 00:55:09.226238 | orchestrator | Tuesday 13 January 2026 00:46:48 +0000 (0:00:00.603) 0:03:00.774 *******
2026-01-13 00:55:09.226244 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.226250 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.226257 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.226263 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.226270 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.226277 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.226284 | orchestrator |
2026-01-13 00:55:09.226291 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-13 00:55:09.226297 | orchestrator | Tuesday 13 January 2026 00:46:49 +0000 (0:00:00.736) 0:03:01.510 *******
2026-01-13 00:55:09.226303 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.226306 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.226310 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.226314 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.226317 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.226321 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.226325 | orchestrator |
2026-01-13 00:55:09.226329 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-13 00:55:09.226333 | orchestrator | Tuesday 13 January 2026 00:46:50 +0000 (0:00:00.626) 0:03:02.137 *******
2026-01-13 00:55:09.226337 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.226341 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.226344 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.226348 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.226352 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.226355 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.226363 | orchestrator |
2026-01-13 00:55:09.226367 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-13 00:55:09.226371 | orchestrator | Tuesday 13 January 2026 00:46:51 +0000 (0:00:01.146) 0:03:03.283 *******
2026-01-13 00:55:09.226375 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.226378 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.226382 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.226386 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.226390 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.226393 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.226397 | orchestrator |
2026-01-13 00:55:09.226401 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-13 00:55:09.226405 | orchestrator | Tuesday 13 January 2026 00:46:51 +0000 (0:00:00.512) 0:03:03.795 *******
2026-01-13 00:55:09.226408 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.226412 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.226416 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.226419 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.226423 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.226427 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.226430 | orchestrator |
2026-01-13 00:55:09.226434 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-13 00:55:09.226438 | orchestrator | Tuesday 13 January 2026 00:46:52 +0000 (0:00:00.793) 0:03:04.589 *******
2026-01-13 00:55:09.226442 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.226445 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.226449 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.226453 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.226457 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.226461 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.226464 | orchestrator |
2026-01-13 00:55:09.226468 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-13 00:55:09.226472 | orchestrator | Tuesday 13 January 2026 00:46:55 +0000 (0:00:02.915) 0:03:07.504 *******
2026-01-13 00:55:09.226476 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.226479 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.226483 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.226487 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.226490 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.226494 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.226498 | orchestrator |
2026-01-13 00:55:09.226502 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-13 00:55:09.226506 | orchestrator | Tuesday 13 January 2026 00:46:56 +0000 (0:00:01.085) 0:03:08.590 *******
2026-01-13 00:55:09.226509 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.226513 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.226517 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.226520 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.226524 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.226528 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.226531 | orchestrator |
2026-01-13 00:55:09.226535 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-13 00:55:09.226539 | orchestrator | Tuesday 13 January 2026 00:46:57 +0000 (0:00:00.843) 0:03:09.433 *******
2026-01-13 00:55:09.226548 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.226552 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.226555 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.226559 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.226563 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.226566 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.226570 | orchestrator |
2026-01-13 00:55:09.226574 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-13 00:55:09.226578 | orchestrator | Tuesday 13 January 2026 00:46:58 +0000 (0:00:01.148) 0:03:10.582 *******
2026-01-13 00:55:09.226584 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-13 00:55:09.226588 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-13 00:55:09.226591 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-13 00:55:09.226595 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.226613 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.226617 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.226621 | orchestrator |
2026-01-13 00:55:09.226625 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-13 00:55:09.226629 | orchestrator | Tuesday 13 January 2026 00:46:59 +0000 (0:00:00.653) 0:03:11.235 *******
2026-01-13 00:55:09.226633 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-01-13 00:55:09.226638 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-01-13 00:55:09.226643 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.226646 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-01-13 00:55:09.226650 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-01-13 00:55:09.226654 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.226658 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-01-13 00:55:09.226662 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
 2026-01-13 00:55:09.226666 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.226670 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.226673 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.226677 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.226681 | orchestrator | 2026-01-13 00:55:09.226684 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-13 00:55:09.226688 | orchestrator | Tuesday 13 January 2026 00:46:59 +0000 (0:00:00.811) 0:03:12.047 ******* 2026-01-13 00:55:09.226692 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.226696 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.226699 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.226703 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.226707 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.226711 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.226717 | orchestrator | 2026-01-13 00:55:09.226720 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-13 00:55:09.226724 | orchestrator | Tuesday 13 January 2026 00:47:00 +0000 (0:00:00.499) 0:03:12.546 ******* 2026-01-13 00:55:09.226728 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.226732 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.226735 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.226739 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.226743 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.226746 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.226764 | orchestrator | 2026-01-13 00:55:09.226769 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-13 00:55:09.226776 | orchestrator | Tuesday 13 January 
2026 00:47:01 +0000 (0:00:00.698) 0:03:13.245 ******* 2026-01-13 00:55:09.226780 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.226784 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.226789 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.226793 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.226798 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.226802 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.226806 | orchestrator | 2026-01-13 00:55:09.226810 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-13 00:55:09.226815 | orchestrator | Tuesday 13 January 2026 00:47:01 +0000 (0:00:00.632) 0:03:13.877 ******* 2026-01-13 00:55:09.226819 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.226823 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.226827 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.226832 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.226836 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.226840 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.226844 | orchestrator | 2026-01-13 00:55:09.226849 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-13 00:55:09.226865 | orchestrator | Tuesday 13 January 2026 00:47:02 +0000 (0:00:00.786) 0:03:14.663 ******* 2026-01-13 00:55:09.226870 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.226875 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.226879 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.226883 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.226887 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.226892 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.226896 | orchestrator | 2026-01-13 00:55:09.226900 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-13 00:55:09.226905 | orchestrator | Tuesday 13 January 2026 00:47:03 +0000 (0:00:00.527) 0:03:15.191 ******* 2026-01-13 00:55:09.226909 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.226913 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.226917 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.226921 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.226926 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.226930 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.226935 | orchestrator | 2026-01-13 00:55:09.226939 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-13 00:55:09.226943 | orchestrator | Tuesday 13 January 2026 00:47:03 +0000 (0:00:00.756) 0:03:15.947 ******* 2026-01-13 00:55:09.226948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-13 00:55:09.226951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-13 00:55:09.226955 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-13 00:55:09.226959 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.226963 | orchestrator | 2026-01-13 00:55:09.226966 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-13 00:55:09.226973 | orchestrator | Tuesday 13 January 2026 00:47:04 +0000 (0:00:00.359) 0:03:16.307 ******* 2026-01-13 00:55:09.226977 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-13 00:55:09.226980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-13 00:55:09.226984 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-13 00:55:09.226988 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.226992 | orchestrator | 2026-01-13 00:55:09.226995 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-13 00:55:09.226999 | orchestrator | Tuesday 13 January 2026 00:47:04 +0000 (0:00:00.356) 0:03:16.663 ******* 2026-01-13 00:55:09.227003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-13 00:55:09.227007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-13 00:55:09.227010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-13 00:55:09.227014 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227018 | orchestrator | 2026-01-13 00:55:09.227021 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-13 00:55:09.227025 | orchestrator | Tuesday 13 January 2026 00:47:04 +0000 (0:00:00.337) 0:03:17.000 ******* 2026-01-13 00:55:09.227029 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.227033 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.227036 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.227040 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.227044 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.227047 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.227051 | orchestrator | 2026-01-13 00:55:09.227055 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-13 00:55:09.227058 | orchestrator | Tuesday 13 January 2026 00:47:05 +0000 (0:00:00.500) 0:03:17.501 ******* 2026-01-13 00:55:09.227062 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-13 00:55:09.227066 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-13 00:55:09.227070 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-13 00:55:09.227073 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-01-13 00:55:09.227077 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.227081 | orchestrator | skipping: [testbed-node-1] => 
(item=0)  2026-01-13 00:55:09.227085 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.227088 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-01-13 00:55:09.227092 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.227096 | orchestrator | 2026-01-13 00:55:09.227099 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-13 00:55:09.227103 | orchestrator | Tuesday 13 January 2026 00:47:06 +0000 (0:00:01.414) 0:03:18.916 ******* 2026-01-13 00:55:09.227107 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.227110 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.227114 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.227118 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.227121 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.227125 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.227129 | orchestrator | 2026-01-13 00:55:09.227132 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-13 00:55:09.227138 | orchestrator | Tuesday 13 January 2026 00:47:08 +0000 (0:00:02.179) 0:03:21.095 ******* 2026-01-13 00:55:09.227142 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.227146 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.227150 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.227153 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.227157 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.227161 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.227164 | orchestrator | 2026-01-13 00:55:09.227168 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-13 00:55:09.227172 | orchestrator | Tuesday 13 January 2026 00:47:09 +0000 (0:00:00.964) 0:03:22.060 ******* 2026-01-13 00:55:09.227184 | orchestrator | skipping: 
[testbed-node-3] 2026-01-13 00:55:09.227188 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.227192 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.227196 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.227199 | orchestrator | 2026-01-13 00:55:09.227203 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-13 00:55:09.227218 | orchestrator | Tuesday 13 January 2026 00:47:10 +0000 (0:00:00.851) 0:03:22.911 ******* 2026-01-13 00:55:09.227222 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.227226 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.227230 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.227233 | orchestrator | 2026-01-13 00:55:09.227237 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-13 00:55:09.227243 | orchestrator | Tuesday 13 January 2026 00:47:11 +0000 (0:00:00.269) 0:03:23.180 ******* 2026-01-13 00:55:09.227250 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.227256 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.227263 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.227270 | orchestrator | 2026-01-13 00:55:09.227277 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-13 00:55:09.227284 | orchestrator | Tuesday 13 January 2026 00:47:12 +0000 (0:00:01.317) 0:03:24.498 ******* 2026-01-13 00:55:09.227291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-13 00:55:09.227298 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-13 00:55:09.227305 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-13 00:55:09.227309 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.227313 | orchestrator | 2026-01-13 
00:55:09.227317 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-13 00:55:09.227320 | orchestrator | Tuesday 13 January 2026 00:47:13 +0000 (0:00:00.629) 0:03:25.127 ******* 2026-01-13 00:55:09.227324 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.227328 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.227332 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.227335 | orchestrator | 2026-01-13 00:55:09.227339 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-13 00:55:09.227343 | orchestrator | Tuesday 13 January 2026 00:47:13 +0000 (0:00:00.360) 0:03:25.488 ******* 2026-01-13 00:55:09.227347 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.227350 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.227354 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.227358 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:55:09.227361 | orchestrator | 2026-01-13 00:55:09.227365 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-13 00:55:09.227369 | orchestrator | Tuesday 13 January 2026 00:47:14 +0000 (0:00:01.080) 0:03:26.568 ******* 2026-01-13 00:55:09.227373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-13 00:55:09.227376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-13 00:55:09.227380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-13 00:55:09.227384 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227387 | orchestrator | 2026-01-13 00:55:09.227391 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-13 00:55:09.227395 | orchestrator | Tuesday 13 January 2026 00:47:14 +0000 (0:00:00.386) 
0:03:26.955 ******* 2026-01-13 00:55:09.227399 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227402 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.227406 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.227410 | orchestrator | 2026-01-13 00:55:09.227413 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-13 00:55:09.227420 | orchestrator | Tuesday 13 January 2026 00:47:15 +0000 (0:00:00.310) 0:03:27.266 ******* 2026-01-13 00:55:09.227424 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227428 | orchestrator | 2026-01-13 00:55:09.227432 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-13 00:55:09.227435 | orchestrator | Tuesday 13 January 2026 00:47:15 +0000 (0:00:00.193) 0:03:27.459 ******* 2026-01-13 00:55:09.227439 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227443 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.227447 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.227450 | orchestrator | 2026-01-13 00:55:09.227454 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-13 00:55:09.227458 | orchestrator | Tuesday 13 January 2026 00:47:15 +0000 (0:00:00.252) 0:03:27.712 ******* 2026-01-13 00:55:09.227462 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227465 | orchestrator | 2026-01-13 00:55:09.227469 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-13 00:55:09.227473 | orchestrator | Tuesday 13 January 2026 00:47:15 +0000 (0:00:00.181) 0:03:27.893 ******* 2026-01-13 00:55:09.227477 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227480 | orchestrator | 2026-01-13 00:55:09.227486 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-13 00:55:09.227492 
| orchestrator | Tuesday 13 January 2026 00:47:15 +0000 (0:00:00.195) 0:03:28.089 ******* 2026-01-13 00:55:09.227496 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227500 | orchestrator | 2026-01-13 00:55:09.227504 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-13 00:55:09.227510 | orchestrator | Tuesday 13 January 2026 00:47:16 +0000 (0:00:00.110) 0:03:28.199 ******* 2026-01-13 00:55:09.227514 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227518 | orchestrator | 2026-01-13 00:55:09.227521 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-13 00:55:09.227525 | orchestrator | Tuesday 13 January 2026 00:47:16 +0000 (0:00:00.530) 0:03:28.729 ******* 2026-01-13 00:55:09.227529 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227533 | orchestrator | 2026-01-13 00:55:09.227536 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-13 00:55:09.227540 | orchestrator | Tuesday 13 January 2026 00:47:16 +0000 (0:00:00.175) 0:03:28.905 ******* 2026-01-13 00:55:09.227544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-13 00:55:09.227548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-13 00:55:09.227551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-13 00:55:09.227556 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227563 | orchestrator | 2026-01-13 00:55:09.227568 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-13 00:55:09.227595 | orchestrator | Tuesday 13 January 2026 00:47:17 +0000 (0:00:00.378) 0:03:29.283 ******* 2026-01-13 00:55:09.227604 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227610 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.227616 | orchestrator | 
skipping: [testbed-node-5] 2026-01-13 00:55:09.227623 | orchestrator | 2026-01-13 00:55:09.227629 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-13 00:55:09.227636 | orchestrator | Tuesday 13 January 2026 00:47:17 +0000 (0:00:00.295) 0:03:29.579 ******* 2026-01-13 00:55:09.227642 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227649 | orchestrator | 2026-01-13 00:55:09.227652 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-13 00:55:09.227656 | orchestrator | Tuesday 13 January 2026 00:47:17 +0000 (0:00:00.174) 0:03:29.754 ******* 2026-01-13 00:55:09.227660 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227663 | orchestrator | 2026-01-13 00:55:09.227667 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-13 00:55:09.227671 | orchestrator | Tuesday 13 January 2026 00:47:17 +0000 (0:00:00.201) 0:03:29.956 ******* 2026-01-13 00:55:09.227678 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.227682 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.227685 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.227689 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:55:09.227693 | orchestrator | 2026-01-13 00:55:09.227697 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-13 00:55:09.227701 | orchestrator | Tuesday 13 January 2026 00:47:18 +0000 (0:00:00.854) 0:03:30.810 ******* 2026-01-13 00:55:09.227704 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.227708 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.227712 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.227716 | orchestrator | 2026-01-13 00:55:09.227719 | orchestrator | RUNNING HANDLER [ceph-handler : 
Copy mds restart script] *********************** 2026-01-13 00:55:09.227723 | orchestrator | Tuesday 13 January 2026 00:47:18 +0000 (0:00:00.260) 0:03:31.070 ******* 2026-01-13 00:55:09.227727 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.227730 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.227734 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.227738 | orchestrator | 2026-01-13 00:55:09.227742 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-13 00:55:09.227745 | orchestrator | Tuesday 13 January 2026 00:47:20 +0000 (0:00:01.208) 0:03:32.278 ******* 2026-01-13 00:55:09.227771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-13 00:55:09.227777 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-13 00:55:09.227781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-13 00:55:09.227785 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227788 | orchestrator | 2026-01-13 00:55:09.227792 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-13 00:55:09.227796 | orchestrator | Tuesday 13 January 2026 00:47:20 +0000 (0:00:00.698) 0:03:32.976 ******* 2026-01-13 00:55:09.227800 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.227804 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.227807 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.227811 | orchestrator | 2026-01-13 00:55:09.227815 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-13 00:55:09.227819 | orchestrator | Tuesday 13 January 2026 00:47:21 +0000 (0:00:00.413) 0:03:33.389 ******* 2026-01-13 00:55:09.227822 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.227826 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.227830 | orchestrator | skipping: 
[testbed-node-2] 2026-01-13 00:55:09.227833 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:55:09.227837 | orchestrator | 2026-01-13 00:55:09.227841 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-13 00:55:09.227845 | orchestrator | Tuesday 13 January 2026 00:47:21 +0000 (0:00:00.690) 0:03:34.080 ******* 2026-01-13 00:55:09.227848 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.227852 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.227856 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.227860 | orchestrator | 2026-01-13 00:55:09.227863 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-13 00:55:09.227867 | orchestrator | Tuesday 13 January 2026 00:47:22 +0000 (0:00:00.631) 0:03:34.711 ******* 2026-01-13 00:55:09.227871 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.227874 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.227878 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.227882 | orchestrator | 2026-01-13 00:55:09.227885 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-13 00:55:09.227889 | orchestrator | Tuesday 13 January 2026 00:47:23 +0000 (0:00:01.327) 0:03:36.039 ******* 2026-01-13 00:55:09.227893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-13 00:55:09.227902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-13 00:55:09.227906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-13 00:55:09.227910 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227913 | orchestrator | 2026-01-13 00:55:09.227917 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-13 00:55:09.227921 | 
orchestrator | Tuesday 13 January 2026 00:47:24 +0000 (0:00:00.712) 0:03:36.752 ******* 2026-01-13 00:55:09.227924 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.227928 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.227932 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.227936 | orchestrator | 2026-01-13 00:55:09.227939 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-13 00:55:09.227943 | orchestrator | Tuesday 13 January 2026 00:47:25 +0000 (0:00:00.355) 0:03:37.108 ******* 2026-01-13 00:55:09.227947 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227950 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.227954 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.227958 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.227961 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.227979 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.227984 | orchestrator | 2026-01-13 00:55:09.227988 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-13 00:55:09.227991 | orchestrator | Tuesday 13 January 2026 00:47:25 +0000 (0:00:00.911) 0:03:38.020 ******* 2026-01-13 00:55:09.227995 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.227999 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.228002 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.228006 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.228010 | orchestrator | 2026-01-13 00:55:09.228014 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-13 00:55:09.228017 | orchestrator | Tuesday 13 January 2026 00:47:26 +0000 (0:00:00.857) 0:03:38.878 ******* 2026-01-13 00:55:09.228021 | orchestrator | ok: 
[testbed-node-0] 2026-01-13 00:55:09.228025 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.228028 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.228032 | orchestrator | 2026-01-13 00:55:09.228036 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-13 00:55:09.228039 | orchestrator | Tuesday 13 January 2026 00:47:27 +0000 (0:00:00.545) 0:03:39.424 ******* 2026-01-13 00:55:09.228043 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.228047 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.228051 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.228054 | orchestrator | 2026-01-13 00:55:09.228058 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-13 00:55:09.228062 | orchestrator | Tuesday 13 January 2026 00:47:28 +0000 (0:00:01.170) 0:03:40.594 ******* 2026-01-13 00:55:09.228065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-13 00:55:09.228069 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-13 00:55:09.228073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-13 00:55:09.228076 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.228080 | orchestrator | 2026-01-13 00:55:09.228084 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-13 00:55:09.228088 | orchestrator | Tuesday 13 January 2026 00:47:29 +0000 (0:00:00.643) 0:03:41.238 ******* 2026-01-13 00:55:09.228091 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.228096 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.228102 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.228108 | orchestrator | 2026-01-13 00:55:09.228118 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-13 00:55:09.228124 | orchestrator | 2026-01-13 
00:55:09.228130 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-13 00:55:09.228141 | orchestrator | Tuesday 13 January 2026 00:47:29 +0000 (0:00:00.644) 0:03:41.882 ******* 2026-01-13 00:55:09.228147 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.228153 | orchestrator | 2026-01-13 00:55:09.228158 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-13 00:55:09.228163 | orchestrator | Tuesday 13 January 2026 00:47:30 +0000 (0:00:00.954) 0:03:42.837 ******* 2026-01-13 00:55:09.228169 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.228176 | orchestrator | 2026-01-13 00:55:09.228182 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-13 00:55:09.228188 | orchestrator | Tuesday 13 January 2026 00:47:31 +0000 (0:00:00.512) 0:03:43.349 ******* 2026-01-13 00:55:09.228194 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.228201 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.228207 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.228213 | orchestrator | 2026-01-13 00:55:09.228219 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-13 00:55:09.228226 | orchestrator | Tuesday 13 January 2026 00:47:32 +0000 (0:00:01.104) 0:03:44.454 ******* 2026-01-13 00:55:09.228230 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.228233 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.228237 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.228243 | orchestrator | 2026-01-13 00:55:09.228251 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-01-13 00:55:09.228260 | orchestrator | Tuesday 13 January 2026 00:47:32 +0000 (0:00:00.324) 0:03:44.779 *******
2026-01-13 00:55:09.228266 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.228272 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.228278 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.228284 | orchestrator |
2026-01-13 00:55:09.228288 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-13 00:55:09.228292 | orchestrator | Tuesday 13 January 2026 00:47:33 +0000 (0:00:00.325) 0:03:45.104 *******
2026-01-13 00:55:09.228296 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.228303 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.228306 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.228310 | orchestrator |
2026-01-13 00:55:09.228314 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-13 00:55:09.228318 | orchestrator | Tuesday 13 January 2026 00:47:33 +0000 (0:00:00.330) 0:03:45.435 *******
2026-01-13 00:55:09.228321 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.228325 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.228329 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.228332 | orchestrator |
2026-01-13 00:55:09.228336 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-13 00:55:09.228340 | orchestrator | Tuesday 13 January 2026 00:47:34 +0000 (0:00:01.105) 0:03:46.540 *******
2026-01-13 00:55:09.228343 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.228347 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.228351 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.228354 | orchestrator |
2026-01-13 00:55:09.228358 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-13 00:55:09.228362 | orchestrator | Tuesday 13 January 2026 00:47:34 +0000 (0:00:00.313) 0:03:46.854 *******
2026-01-13 00:55:09.228382 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.228386 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.228390 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.228394 | orchestrator |
2026-01-13 00:55:09.228398 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-13 00:55:09.228401 | orchestrator | Tuesday 13 January 2026 00:47:35 +0000 (0:00:00.333) 0:03:47.187 *******
2026-01-13 00:55:09.228410 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.228413 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.228417 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.228421 | orchestrator |
2026-01-13 00:55:09.228425 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-13 00:55:09.228428 | orchestrator | Tuesday 13 January 2026 00:47:35 +0000 (0:00:00.649) 0:03:47.837 *******
2026-01-13 00:55:09.228432 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.228436 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.228439 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.228443 | orchestrator |
2026-01-13 00:55:09.228447 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-13 00:55:09.228451 | orchestrator | Tuesday 13 January 2026 00:47:36 +0000 (0:00:01.059) 0:03:48.896 *******
2026-01-13 00:55:09.228455 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.228458 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.228462 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.228466 | orchestrator |
2026-01-13 00:55:09.228469 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-13 00:55:09.228473 | orchestrator | Tuesday 13 January 2026 00:47:37 +0000 (0:00:00.318) 0:03:49.214 *******
2026-01-13 00:55:09.228477 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.228481 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.228484 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.228488 | orchestrator |
2026-01-13 00:55:09.228492 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-13 00:55:09.228495 | orchestrator | Tuesday 13 January 2026 00:47:37 +0000 (0:00:00.305) 0:03:49.520 *******
2026-01-13 00:55:09.228499 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.228503 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.228507 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.228510 | orchestrator |
2026-01-13 00:55:09.228514 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-13 00:55:09.228518 | orchestrator | Tuesday 13 January 2026 00:47:37 +0000 (0:00:00.324) 0:03:49.844 *******
2026-01-13 00:55:09.228521 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.228525 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.228529 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.228533 | orchestrator |
2026-01-13 00:55:09.228536 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-13 00:55:09.228540 | orchestrator | Tuesday 13 January 2026 00:47:38 +0000 (0:00:00.365) 0:03:50.210 *******
2026-01-13 00:55:09.228544 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.228548 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.228551 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.228555 | orchestrator |
2026-01-13 00:55:09.228559 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-13 00:55:09.228563 | orchestrator | Tuesday 13 January 2026 00:47:38 +0000 (0:00:00.630) 0:03:50.840 *******
2026-01-13 00:55:09.228567 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.228570 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.228574 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.228578 | orchestrator |
2026-01-13 00:55:09.228582 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-13 00:55:09.228585 | orchestrator | Tuesday 13 January 2026 00:47:39 +0000 (0:00:00.452) 0:03:51.293 *******
2026-01-13 00:55:09.228589 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.228593 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.228597 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.228600 | orchestrator |
2026-01-13 00:55:09.228604 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-13 00:55:09.228608 | orchestrator | Tuesday 13 January 2026 00:47:39 +0000 (0:00:00.338) 0:03:51.632 *******
2026-01-13 00:55:09.228612 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.228620 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.228624 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.228628 | orchestrator |
2026-01-13 00:55:09.228632 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-13 00:55:09.228636 | orchestrator | Tuesday 13 January 2026 00:47:39 +0000 (0:00:00.334) 0:03:51.966 *******
2026-01-13 00:55:09.228639 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.228643 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.228647 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.228651 | orchestrator |
2026-01-13 00:55:09.228654 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-13 00:55:09.228658 | orchestrator | Tuesday 13 January 2026 00:47:40 +0000 (0:00:00.718) 0:03:52.684 *******
2026-01-13 00:55:09.228662 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.228666 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.228672 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.228676 | orchestrator |
2026-01-13 00:55:09.228680 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-01-13 00:55:09.228683 | orchestrator | Tuesday 13 January 2026 00:47:41 +0000 (0:00:00.538) 0:03:53.223 *******
2026-01-13 00:55:09.228687 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.228691 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.228695 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.228699 | orchestrator |
2026-01-13 00:55:09.228703 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-01-13 00:55:09.228706 | orchestrator | Tuesday 13 January 2026 00:47:41 +0000 (0:00:00.340) 0:03:53.563 *******
2026-01-13 00:55:09.228710 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:55:09.228714 | orchestrator |
2026-01-13 00:55:09.228718 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-01-13 00:55:09.228722 | orchestrator | Tuesday 13 January 2026 00:47:42 +0000 (0:00:00.923) 0:03:54.486 *******
2026-01-13 00:55:09.228726 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.228731 | orchestrator |
2026-01-13 00:55:09.228768 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-01-13 00:55:09.228777 | orchestrator | Tuesday 13 January 2026 00:47:42 +0000 (0:00:00.164) 0:03:54.651 *******
2026-01-13 00:55:09.228782 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-13 00:55:09.228788 | orchestrator |
2026-01-13 00:55:09.228794 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-01-13 00:55:09.228800 | orchestrator | Tuesday 13 January 2026 00:47:43 +0000 (0:00:01.169) 0:03:55.821 *******
2026-01-13 00:55:09.228806 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.228812 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.228819 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.228826 | orchestrator |
2026-01-13 00:55:09.228832 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-01-13 00:55:09.228839 | orchestrator | Tuesday 13 January 2026 00:47:44 +0000 (0:00:00.482) 0:03:56.304 *******
2026-01-13 00:55:09.228843 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.228847 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.228851 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.228854 | orchestrator |
2026-01-13 00:55:09.228858 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-01-13 00:55:09.228862 | orchestrator | Tuesday 13 January 2026 00:47:44 +0000 (0:00:00.444) 0:03:56.748 *******
2026-01-13 00:55:09.228865 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:55:09.228869 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:55:09.228873 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:55:09.228877 | orchestrator |
2026-01-13 00:55:09.228880 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-01-13 00:55:09.228884 | orchestrator | Tuesday 13 January 2026 00:47:45 +0000 (0:00:01.189) 0:03:57.937 *******
2026-01-13 00:55:09.228888 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:55:09.228896 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:55:09.228899 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:55:09.228903 | orchestrator |
2026-01-13 00:55:09.228907 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-01-13 00:55:09.228910 | orchestrator | Tuesday 13 January 2026 00:47:46 +0000 (0:00:00.597) 0:03:58.535 *******
2026-01-13 00:55:09.228914 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:55:09.228918 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:55:09.228921 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:55:09.228925 | orchestrator |
2026-01-13 00:55:09.228929 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-01-13 00:55:09.228933 | orchestrator | Tuesday 13 January 2026 00:47:46 +0000 (0:00:00.552) 0:03:59.088 *******
2026-01-13 00:55:09.228936 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.228940 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.228944 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.228947 | orchestrator |
2026-01-13 00:55:09.228951 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-01-13 00:55:09.228955 | orchestrator | Tuesday 13 January 2026 00:47:47 +0000 (0:00:00.698) 0:03:59.786 *******
2026-01-13 00:55:09.228958 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:55:09.228962 | orchestrator |
2026-01-13 00:55:09.228966 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-01-13 00:55:09.228969 | orchestrator | Tuesday 13 January 2026 00:47:49 +0000 (0:00:01.347) 0:04:01.134 *******
2026-01-13 00:55:09.228973 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.228977 | orchestrator |
2026-01-13 00:55:09.228980 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-01-13 00:55:09.228984 | orchestrator | Tuesday 13 January 2026 00:47:49 +0000 (0:00:00.646) 0:04:01.780 *******
2026-01-13 00:55:09.228988 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-13 00:55:09.228992 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-13 00:55:09.228995 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-13 00:55:09.228999 | orchestrator | changed: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-13 00:55:09.229003 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-13 00:55:09.229006 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-01-13 00:55:09.229010 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-13 00:55:09.229014 | orchestrator | changed: [testbed-node-2 -> {{ item }}]
2026-01-13 00:55:09.229017 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-13 00:55:09.229021 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-13 00:55:09.229025 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-01-13 00:55:09.229028 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-01-13 00:55:09.229032 | orchestrator |
2026-01-13 00:55:09.229036 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-01-13 00:55:09.229040 | orchestrator | Tuesday 13 January 2026 00:47:52 +0000 (0:00:03.055) 0:04:04.836 *******
2026-01-13 00:55:09.229046 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:55:09.229050 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:55:09.229053 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:55:09.229057 | orchestrator |
2026-01-13 00:55:09.229061 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-01-13 00:55:09.229064 | orchestrator | Tuesday 13 January 2026 00:47:54 +0000 (0:00:01.562) 0:04:06.398 *******
2026-01-13 00:55:09.229068 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.229072 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.229075 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.229079 | orchestrator |
2026-01-13 00:55:09.229084 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-01-13 00:55:09.229090 | orchestrator | Tuesday 13 January 2026 00:47:54 +0000 (0:00:00.526) 0:04:06.925 *******
2026-01-13 00:55:09.229102 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.229111 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.229117 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.229123 | orchestrator |
2026-01-13 00:55:09.229129 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-01-13 00:55:09.229135 | orchestrator | Tuesday 13 January 2026 00:47:56 +0000 (0:00:01.471) 0:04:08.396 *******
2026-01-13 00:55:09.229163 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:55:09.229170 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:55:09.229176 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:55:09.229182 | orchestrator |
2026-01-13 00:55:09.229188 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-01-13 00:55:09.229194 | orchestrator | Tuesday 13 January 2026 00:48:00 +0000 (0:00:03.806) 0:04:12.202 *******
2026-01-13 00:55:09.229200 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:55:09.229207 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:55:09.229213 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:55:09.229219 | orchestrator |
2026-01-13 00:55:09.229226 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-01-13 00:55:09.229232 | orchestrator | Tuesday 13 January 2026 00:48:01 +0000 (0:00:01.256) 0:04:13.459 *******
2026-01-13 00:55:09.229239 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.229245 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.229251 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.229258 | orchestrator |
2026-01-13 00:55:09.229264 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-01-13 00:55:09.229270 | orchestrator | Tuesday 13 January 2026 00:48:01 +0000 (0:00:00.304) 0:04:13.763 *******
2026-01-13 00:55:09.229276 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:55:09.229283 | orchestrator |
2026-01-13 00:55:09.229289 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-01-13 00:55:09.229295 | orchestrator | Tuesday 13 January 2026 00:48:02 +0000 (0:00:01.005) 0:04:14.769 *******
2026-01-13 00:55:09.229301 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.229307 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.229314 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.229321 | orchestrator |
2026-01-13 00:55:09.229327 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-01-13 00:55:09.229333 | orchestrator | Tuesday 13 January 2026 00:48:03 +0000 (0:00:00.537) 0:04:15.306 *******
2026-01-13 00:55:09.229339 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.229345 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.229352 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.229358 | orchestrator |
2026-01-13 00:55:09.229364 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-01-13 00:55:09.229370 | orchestrator | Tuesday 13 January 2026 00:48:03 +0000 (0:00:00.317) 0:04:15.623 *******
2026-01-13 00:55:09.229377 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:55:09.229382 | orchestrator |
2026-01-13 00:55:09.229388 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-01-13 00:55:09.229395 | orchestrator | Tuesday 13 January 2026 00:48:04 +0000 (0:00:00.717) 0:04:16.341 *******
2026-01-13 00:55:09.229401 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:55:09.229407 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:55:09.229413 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:55:09.229420 | orchestrator |
2026-01-13 00:55:09.229426 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-01-13 00:55:09.229432 | orchestrator | Tuesday 13 January 2026 00:48:05 +0000 (0:00:01.548) 0:04:17.890 *******
2026-01-13 00:55:09.229439 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:55:09.229445 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:55:09.229455 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:55:09.229462 | orchestrator |
2026-01-13 00:55:09.229469 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-01-13 00:55:09.229475 | orchestrator | Tuesday 13 January 2026 00:48:06 +0000 (0:00:01.196) 0:04:19.086 *******
2026-01-13 00:55:09.229482 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:55:09.229488 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:55:09.229494 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:55:09.229500 | orchestrator |
2026-01-13 00:55:09.229506 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-01-13 00:55:09.229513 | orchestrator | Tuesday 13 January 2026 00:48:08 +0000 (0:00:01.952) 0:04:21.038 *******
2026-01-13 00:55:09.229519 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:55:09.229525 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:55:09.229531 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:55:09.229537 | orchestrator |
2026-01-13 00:55:09.229543 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-01-13 00:55:09.229549 | orchestrator | Tuesday 13 January 2026 00:48:11 +0000 (0:00:02.234) 0:04:23.273 *******
2026-01-13 00:55:09.229556 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:55:09.229562 | orchestrator |
2026-01-13 00:55:09.229569 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-01-13 00:55:09.229575 | orchestrator | Tuesday 13 January 2026 00:48:11 +0000 (0:00:00.738) 0:04:24.012 *******
2026-01-13 00:55:09.229586 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-01-13 00:55:09.229590 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.229593 | orchestrator |
2026-01-13 00:55:09.229597 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-01-13 00:55:09.229601 | orchestrator | Tuesday 13 January 2026 00:48:33 +0000 (0:00:21.752) 0:04:45.764 *******
2026-01-13 00:55:09.229605 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.229608 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.229612 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.229616 | orchestrator |
2026-01-13 00:55:09.229620 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-01-13 00:55:09.229623 | orchestrator | Tuesday 13 January 2026 00:48:44 +0000 (0:00:11.068) 0:04:56.832 *******
2026-01-13 00:55:09.229627 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.229631 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.229634 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.229638 | orchestrator |
2026-01-13 00:55:09.229642 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-01-13 00:55:09.229662 | orchestrator | Tuesday 13 January 2026 00:48:45 +0000 (0:00:00.571) 0:04:57.404 *******
2026-01-13 00:55:09.229668 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f30337ccff23b6379a097436e4a93a5b8d5b6a4e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-01-13 00:55:09.229672 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f30337ccff23b6379a097436e4a93a5b8d5b6a4e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-01-13 00:55:09.229677 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f30337ccff23b6379a097436e4a93a5b8d5b6a4e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-01-13 00:55:09.229686 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f30337ccff23b6379a097436e4a93a5b8d5b6a4e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-01-13 00:55:09.229690 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f30337ccff23b6379a097436e4a93a5b8d5b6a4e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-01-13 00:55:09.229694 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f30337ccff23b6379a097436e4a93a5b8d5b6a4e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__f30337ccff23b6379a097436e4a93a5b8d5b6a4e'}])
2026-01-13 00:55:09.229699 | orchestrator |
2026-01-13 00:55:09.229703 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-13 00:55:09.229707 | orchestrator | Tuesday 13 January 2026 00:49:00 +0000 (0:00:15.229) 0:05:12.634 *******
2026-01-13 00:55:09.229710 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.229714 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.229718 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.229721 | orchestrator |
2026-01-13 00:55:09.229725 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-13 00:55:09.229729 | orchestrator | Tuesday 13 January 2026 00:49:00 +0000 (0:00:00.313) 0:05:12.947 *******
2026-01-13 00:55:09.229733 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:55:09.229736 | orchestrator |
2026-01-13 00:55:09.229740 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-13 00:55:09.229744 | orchestrator | Tuesday 13 January 2026 00:49:01 +0000 (0:00:00.742) 0:05:13.690 *******
2026-01-13 00:55:09.229747 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.229765 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.229769 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.229773 | orchestrator |
2026-01-13 00:55:09.229776 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-13 00:55:09.229782 | orchestrator | Tuesday 13 January 2026 00:49:01 +0000 (0:00:00.319) 0:05:14.009 *******
2026-01-13 00:55:09.229786 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.229790 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.229793 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.229797 | orchestrator |
2026-01-13 00:55:09.229801 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-13 00:55:09.229804 | orchestrator | Tuesday 13 January 2026 00:49:02 +0000 (0:00:00.395) 0:05:14.405 *******
2026-01-13 00:55:09.229809 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-13 00:55:09.229812 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-13 00:55:09.229816 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-13 00:55:09.229820 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.229824 | orchestrator |
2026-01-13 00:55:09.229827 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-13 00:55:09.229831 | orchestrator | Tuesday 13 January 2026 00:49:03 +0000 (0:00:01.238) 0:05:15.644 *******
2026-01-13 00:55:09.229835 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.229851 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.229858 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.229862 | orchestrator |
2026-01-13 00:55:09.229866 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-01-13 00:55:09.229869 | orchestrator |
2026-01-13 00:55:09.229873 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-13 00:55:09.229877 | orchestrator | Tuesday 13 January 2026 00:49:04 +0000 (0:00:00.546) 0:05:16.190 *******
2026-01-13 00:55:09.229881 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:55:09.229885 | orchestrator |
2026-01-13 00:55:09.229888 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-13 00:55:09.229892 | orchestrator | Tuesday 13 January 2026 00:49:04 +0000 (0:00:00.480) 0:05:16.671 *******
2026-01-13 00:55:09.229896 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:55:09.229900 | orchestrator |
2026-01-13 00:55:09.229903 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-13 00:55:09.229907 | orchestrator | Tuesday 13 January 2026 00:49:05 +0000 (0:00:00.752) 0:05:17.423 *******
2026-01-13 00:55:09.229911 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.229914 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.229918 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.229922 | orchestrator |
2026-01-13 00:55:09.229925 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-13 00:55:09.229929 | orchestrator | Tuesday 13 January 2026 00:49:06 +0000 (0:00:00.733) 0:05:18.157 *******
2026-01-13 00:55:09.229933 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.229937 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.229940 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.229944 | orchestrator |
2026-01-13 00:55:09.229948 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-13 00:55:09.229951 | orchestrator | Tuesday 13 January 2026 00:49:06 +0000 (0:00:00.305) 0:05:18.463 *******
2026-01-13 00:55:09.229955 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.229959 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.229962 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.229966 | orchestrator |
2026-01-13 00:55:09.229970 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-13 00:55:09.229973 | orchestrator | Tuesday 13 January 2026 00:49:06 +0000 (0:00:00.527) 0:05:18.991 *******
2026-01-13 00:55:09.229977 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.229981 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.229985 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.229988 | orchestrator |
2026-01-13 00:55:09.229992 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-13 00:55:09.229996 | orchestrator | Tuesday 13 January 2026 00:49:07 +0000 (0:00:00.318) 0:05:19.309 *******
2026-01-13 00:55:09.229999 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.230003 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.230007 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.230010 | orchestrator |
2026-01-13 00:55:09.230034 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-13 00:55:09.230039 | orchestrator | Tuesday 13 January 2026 00:49:08 +0000 (0:00:00.788) 0:05:20.097 *******
2026-01-13 00:55:09.230042 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.230046 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.230050 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.230053 | orchestrator |
2026-01-13 00:55:09.230057 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-13 00:55:09.230061 | orchestrator | Tuesday 13 January 2026 00:49:08 +0000 (0:00:00.329) 0:05:20.427 *******
2026-01-13 00:55:09.230065 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.230068 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.230072 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.230078 | orchestrator |
2026-01-13 00:55:09.230085 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-13 00:55:09.230091 | orchestrator | Tuesday 13 January 2026 00:49:08 +0000 (0:00:00.545) 0:05:20.973 *******
2026-01-13 00:55:09.230097 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.230103 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.230109 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.230115 | orchestrator |
2026-01-13 00:55:09.230122 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-13 00:55:09.230129 | orchestrator | Tuesday 13 January 2026 00:49:09 +0000 (0:00:00.716) 0:05:21.689 *******
2026-01-13 00:55:09.230135 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.230142 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.230147 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.230151 | orchestrator |
2026-01-13 00:55:09.230155 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-13 00:55:09.230158 | orchestrator | Tuesday 13 January 2026 00:49:10 +0000 (0:00:00.708) 0:05:22.397 *******
2026-01-13 00:55:09.230165 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.230168 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.230172 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.230176 | orchestrator |
2026-01-13 00:55:09.230180 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-13 00:55:09.230187 | orchestrator | Tuesday 13 January 2026 00:49:10 +0000 (0:00:00.225) 0:05:22.623 *******
2026-01-13 00:55:09.230193 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.230199 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.230205 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.230212 | orchestrator |
2026-01-13 00:55:09.230218 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-13 00:55:09.230225 | orchestrator | Tuesday 13 January 2026 00:49:10 +0000 (0:00:00.424) 0:05:23.048 *******
2026-01-13 00:55:09.230231 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.230238 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.230242 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.230245 | orchestrator |
2026-01-13 00:55:09.230249 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-13 00:55:09.230268 | orchestrator | Tuesday 13 January 2026 00:49:11 +0000 (0:00:00.254) 0:05:23.303 *******
2026-01-13 00:55:09.230274 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.230281 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.230288 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.230295 | orchestrator |
2026-01-13 00:55:09.230302 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-13 00:55:09.230309 | orchestrator | Tuesday 13 January 2026 00:49:11 +0000 (0:00:00.261) 0:05:23.565 *******
2026-01-13 00:55:09.230316 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.230322 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.230329 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.230336 | orchestrator |
2026-01-13 00:55:09.230342 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-13 00:55:09.230345 | orchestrator | Tuesday 13 January 2026 00:49:11 +0000 (0:00:00.261) 0:05:23.827 *******
2026-01-13 00:55:09.230349 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.230353 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.230356 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.230360 | orchestrator |
2026-01-13 00:55:09.230364 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-13 00:55:09.230371 | orchestrator | Tuesday 13 January 2026 00:49:12 +0000 (0:00:00.310) 0:05:24.137 *******
2026-01-13 00:55:09.230377 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.230383 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.230389 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.230395 | orchestrator |
2026-01-13 00:55:09.230402 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-13 00:55:09.230412 | orchestrator | Tuesday 13 January 2026 00:49:12 +0000 (0:00:00.672) 0:05:24.810 *******
2026-01-13 00:55:09.230419 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.230425 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.230432 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.230438 | orchestrator |
2026-01-13 00:55:09.230444 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-13 00:55:09.230451 | orchestrator | Tuesday 13 January 2026 00:49:13 +0000 (0:00:00.321) 0:05:25.131 *******
2026-01-13 00:55:09.230455 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.230462 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.230468 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.230474 | orchestrator |
2026-01-13 00:55:09.230480 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-13 00:55:09.230487 | orchestrator | Tuesday 13 January 2026 00:49:13 +0000 (0:00:00.307) 0:05:25.439 *******
2026-01-13 00:55:09.230493 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.230500 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.230506 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.230512 | orchestrator |
2026-01-13 00:55:09.230518 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-01-13 00:55:09.230525 | orchestrator | Tuesday 13 January 2026 00:49:14 +0000 (0:00:00.742) 0:05:26.181 *******
2026-01-13 00:55:09.230531 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-13 00:55:09.230537 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-13 00:55:09.230543 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-13 00:55:09.230550 | orchestrator |
2026-01-13 00:55:09.230556 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-01-13 00:55:09.230562 | orchestrator | Tuesday 13 January 2026 00:49:14 +0000 (0:00:00.584) 0:05:26.765 *******
2026-01-13 00:55:09.230568 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:55:09.230575 | orchestrator |
2026-01-13 00:55:09.230580 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-01-13 00:55:09.230584 | orchestrator | Tuesday 13 January 2026 00:49:15 +0000 (0:00:00.498) 0:05:27.263 *******
2026-01-13 00:55:09.230588 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:55:09.230591 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:55:09.230595 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:55:09.230599 | orchestrator |
2026-01-13 00:55:09.230602 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-01-13 00:55:09.230606 | orchestrator | Tuesday 13 January 2026 00:49:15 +0000 (0:00:00.677) 0:05:27.941 ******* 2026-01-13 00:55:09.230610 |
orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.230613 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.230617 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.230621 | orchestrator | 2026-01-13 00:55:09.230624 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-13 00:55:09.230628 | orchestrator | Tuesday 13 January 2026 00:49:16 +0000 (0:00:00.515) 0:05:28.456 ******* 2026-01-13 00:55:09.230632 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-13 00:55:09.230636 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-13 00:55:09.230640 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-13 00:55:09.230646 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-13 00:55:09.230650 | orchestrator | 2026-01-13 00:55:09.230653 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-13 00:55:09.230657 | orchestrator | Tuesday 13 January 2026 00:49:27 +0000 (0:00:10.759) 0:05:39.216 ******* 2026-01-13 00:55:09.230661 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.230664 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.230671 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.230675 | orchestrator | 2026-01-13 00:55:09.230679 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-13 00:55:09.230682 | orchestrator | Tuesday 13 January 2026 00:49:27 +0000 (0:00:00.361) 0:05:39.578 ******* 2026-01-13 00:55:09.230686 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-13 00:55:09.230690 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-13 00:55:09.230694 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-13 00:55:09.230698 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-13 00:55:09.230701 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:55:09.230719 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:55:09.230723 | orchestrator | 2026-01-13 00:55:09.230727 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-13 00:55:09.230731 | orchestrator | Tuesday 13 January 2026 00:49:29 +0000 (0:00:02.190) 0:05:41.768 ******* 2026-01-13 00:55:09.230735 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-13 00:55:09.230738 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-13 00:55:09.230742 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-13 00:55:09.230746 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-13 00:55:09.230771 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-13 00:55:09.230776 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-13 00:55:09.230780 | orchestrator | 2026-01-13 00:55:09.230783 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-13 00:55:09.230787 | orchestrator | Tuesday 13 January 2026 00:49:30 +0000 (0:00:01.278) 0:05:43.047 ******* 2026-01-13 00:55:09.230791 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.230795 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.230798 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.230802 | orchestrator | 2026-01-13 00:55:09.230806 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-13 00:55:09.230809 | orchestrator | Tuesday 13 January 2026 00:49:31 +0000 (0:00:01.039) 0:05:44.087 ******* 2026-01-13 00:55:09.230813 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.230817 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.230820 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.230824 | 
orchestrator | 2026-01-13 00:55:09.230828 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-13 00:55:09.230831 | orchestrator | Tuesday 13 January 2026 00:49:32 +0000 (0:00:00.322) 0:05:44.409 ******* 2026-01-13 00:55:09.230835 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.230839 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.230843 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.230846 | orchestrator | 2026-01-13 00:55:09.230850 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-01-13 00:55:09.230854 | orchestrator | Tuesday 13 January 2026 00:49:32 +0000 (0:00:00.329) 0:05:44.739 ******* 2026-01-13 00:55:09.230857 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.230861 | orchestrator | 2026-01-13 00:55:09.230865 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-01-13 00:55:09.230868 | orchestrator | Tuesday 13 January 2026 00:49:33 +0000 (0:00:00.815) 0:05:45.554 ******* 2026-01-13 00:55:09.230872 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.230876 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.230879 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.230883 | orchestrator | 2026-01-13 00:55:09.230887 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-01-13 00:55:09.230891 | orchestrator | Tuesday 13 January 2026 00:49:33 +0000 (0:00:00.322) 0:05:45.876 ******* 2026-01-13 00:55:09.230894 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.230898 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.230908 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:09.230912 | orchestrator | 2026-01-13 00:55:09.230916 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-01-13 00:55:09.230919 | orchestrator | Tuesday 13 January 2026 00:49:34 +0000 (0:00:00.320) 0:05:46.197 ******* 2026-01-13 00:55:09.230923 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.230927 | orchestrator | 2026-01-13 00:55:09.230931 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-01-13 00:55:09.230934 | orchestrator | Tuesday 13 January 2026 00:49:34 +0000 (0:00:00.724) 0:05:46.922 ******* 2026-01-13 00:55:09.230938 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.230942 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.230945 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.230949 | orchestrator | 2026-01-13 00:55:09.230953 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-01-13 00:55:09.230956 | orchestrator | Tuesday 13 January 2026 00:49:36 +0000 (0:00:01.235) 0:05:48.158 ******* 2026-01-13 00:55:09.230960 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.230964 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.230967 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.230971 | orchestrator | 2026-01-13 00:55:09.230975 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-01-13 00:55:09.230978 | orchestrator | Tuesday 13 January 2026 00:49:37 +0000 (0:00:01.214) 0:05:49.373 ******* 2026-01-13 00:55:09.230982 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.230986 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.230989 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.230993 | orchestrator | 2026-01-13 00:55:09.230999 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-01-13 00:55:09.231003 | orchestrator | Tuesday 13 January 2026 00:49:39 +0000 (0:00:01.815) 0:05:51.188 ******* 2026-01-13 00:55:09.231007 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.231010 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.231014 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.231018 | orchestrator | 2026-01-13 00:55:09.231021 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-01-13 00:55:09.231025 | orchestrator | Tuesday 13 January 2026 00:49:41 +0000 (0:00:02.155) 0:05:53.343 ******* 2026-01-13 00:55:09.231029 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.231032 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:09.231036 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-01-13 00:55:09.231040 | orchestrator | 2026-01-13 00:55:09.231044 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-01-13 00:55:09.231049 | orchestrator | Tuesday 13 January 2026 00:49:41 +0000 (0:00:00.410) 0:05:53.754 ******* 2026-01-13 00:55:09.231075 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-01-13 00:55:09.231084 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-01-13 00:55:09.231090 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-01-13 00:55:09.231096 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-01-13 00:55:09.231102 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-01-13 00:55:09.231107 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 2026-01-13 00:55:09.231113 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-13 00:55:09.231118 | orchestrator | 2026-01-13 00:55:09.231124 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-01-13 00:55:09.231164 | orchestrator | Tuesday 13 January 2026 00:50:17 +0000 (0:00:35.440) 0:06:29.195 ******* 2026-01-13 00:55:09.231172 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-13 00:55:09.231178 | orchestrator | 2026-01-13 00:55:09.231184 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-01-13 00:55:09.231190 | orchestrator | Tuesday 13 January 2026 00:50:18 +0000 (0:00:01.200) 0:06:30.396 ******* 2026-01-13 00:55:09.231196 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.231202 | orchestrator | 2026-01-13 00:55:09.231208 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-01-13 00:55:09.231214 | orchestrator | Tuesday 13 January 2026 00:50:18 +0000 (0:00:00.312) 0:06:30.708 ******* 2026-01-13 00:55:09.231220 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.231226 | orchestrator | 2026-01-13 00:55:09.231233 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-01-13 00:55:09.231240 | orchestrator | Tuesday 13 January 2026 00:50:18 +0000 (0:00:00.141) 0:06:30.850 ******* 2026-01-13 00:55:09.231247 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-01-13 00:55:09.231254 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-01-13 00:55:09.231260 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-01-13 
00:55:09.231267 | orchestrator | 2026-01-13 00:55:09.231274 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-01-13 00:55:09.231279 | orchestrator | Tuesday 13 January 2026 00:50:25 +0000 (0:00:06.291) 0:06:37.141 ******* 2026-01-13 00:55:09.231283 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-01-13 00:55:09.231287 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-01-13 00:55:09.231291 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-01-13 00:55:09.231294 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-01-13 00:55:09.231298 | orchestrator | 2026-01-13 00:55:09.231302 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-13 00:55:09.231305 | orchestrator | Tuesday 13 January 2026 00:50:30 +0000 (0:00:05.428) 0:06:42.569 ******* 2026-01-13 00:55:09.231309 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.231313 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.231316 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.231320 | orchestrator | 2026-01-13 00:55:09.231324 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-13 00:55:09.231328 | orchestrator | Tuesday 13 January 2026 00:50:31 +0000 (0:00:00.808) 0:06:43.378 ******* 2026-01-13 00:55:09.231331 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.231336 | orchestrator | 2026-01-13 00:55:09.231342 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-13 00:55:09.231348 | orchestrator | Tuesday 13 January 2026 00:50:32 +0000 (0:00:00.842) 0:06:44.220 ******* 2026-01-13 00:55:09.231354 | orchestrator | ok: [testbed-node-0] 
2026-01-13 00:55:09.231361 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.231367 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.231373 | orchestrator | 2026-01-13 00:55:09.231380 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-13 00:55:09.231387 | orchestrator | Tuesday 13 January 2026 00:50:32 +0000 (0:00:00.348) 0:06:44.568 ******* 2026-01-13 00:55:09.231393 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.231400 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.231406 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.231412 | orchestrator | 2026-01-13 00:55:09.231422 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-13 00:55:09.231429 | orchestrator | Tuesday 13 January 2026 00:50:33 +0000 (0:00:01.090) 0:06:45.659 ******* 2026-01-13 00:55:09.231440 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-13 00:55:09.231446 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-13 00:55:09.231452 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-13 00:55:09.231458 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:09.231464 | orchestrator | 2026-01-13 00:55:09.231471 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-13 00:55:09.231477 | orchestrator | Tuesday 13 January 2026 00:50:34 +0000 (0:00:00.600) 0:06:46.259 ******* 2026-01-13 00:55:09.231482 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.231488 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.231493 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.231499 | orchestrator | 2026-01-13 00:55:09.231505 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-01-13 00:55:09.231511 | orchestrator | 2026-01-13 00:55:09.231517 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-13 00:55:09.231556 | orchestrator | Tuesday 13 January 2026 00:50:34 +0000 (0:00:00.815) 0:06:47.075 ******* 2026-01-13 00:55:09.231563 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:55:09.231569 | orchestrator | 2026-01-13 00:55:09.231576 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-13 00:55:09.231582 | orchestrator | Tuesday 13 January 2026 00:50:35 +0000 (0:00:00.501) 0:06:47.576 ******* 2026-01-13 00:55:09.231587 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:55:09.231593 | orchestrator | 2026-01-13 00:55:09.231598 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-13 00:55:09.231604 | orchestrator | Tuesday 13 January 2026 00:50:36 +0000 (0:00:00.804) 0:06:48.380 ******* 2026-01-13 00:55:09.231611 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.231617 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.231623 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.231628 | orchestrator | 2026-01-13 00:55:09.231634 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-13 00:55:09.231639 | orchestrator | Tuesday 13 January 2026 00:50:36 +0000 (0:00:00.346) 0:06:48.726 ******* 2026-01-13 00:55:09.231645 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.231652 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.231659 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.231665 | orchestrator | 2026-01-13 00:55:09.231672 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-13 
00:55:09.231679 | orchestrator | Tuesday 13 January 2026 00:50:37 +0000 (0:00:00.712) 0:06:49.439 ******* 2026-01-13 00:55:09.231686 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.231693 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.231700 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.231707 | orchestrator | 2026-01-13 00:55:09.231714 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-13 00:55:09.231721 | orchestrator | Tuesday 13 January 2026 00:50:38 +0000 (0:00:00.825) 0:06:50.265 ******* 2026-01-13 00:55:09.231728 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.231734 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.231741 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.231748 | orchestrator | 2026-01-13 00:55:09.231767 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-13 00:55:09.231773 | orchestrator | Tuesday 13 January 2026 00:50:39 +0000 (0:00:01.482) 0:06:51.747 ******* 2026-01-13 00:55:09.231780 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.231787 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.231794 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.231801 | orchestrator | 2026-01-13 00:55:09.231807 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-13 00:55:09.231821 | orchestrator | Tuesday 13 January 2026 00:50:39 +0000 (0:00:00.314) 0:06:52.062 ******* 2026-01-13 00:55:09.231827 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.231834 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.231841 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.231848 | orchestrator | 2026-01-13 00:55:09.231855 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-13 00:55:09.231862 | orchestrator | 
Tuesday 13 January 2026 00:50:40 +0000 (0:00:00.333) 0:06:52.395 ******* 2026-01-13 00:55:09.231869 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.231876 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.231882 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.231889 | orchestrator | 2026-01-13 00:55:09.231896 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-13 00:55:09.231902 | orchestrator | Tuesday 13 January 2026 00:50:40 +0000 (0:00:00.325) 0:06:52.721 ******* 2026-01-13 00:55:09.231908 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.231914 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.231920 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.231926 | orchestrator | 2026-01-13 00:55:09.231931 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-13 00:55:09.231937 | orchestrator | Tuesday 13 January 2026 00:50:41 +0000 (0:00:00.996) 0:06:53.717 ******* 2026-01-13 00:55:09.231943 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.231948 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.231954 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.231960 | orchestrator | 2026-01-13 00:55:09.231966 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-13 00:55:09.231972 | orchestrator | Tuesday 13 January 2026 00:50:42 +0000 (0:00:00.782) 0:06:54.500 ******* 2026-01-13 00:55:09.231978 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.231984 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.231990 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.231996 | orchestrator | 2026-01-13 00:55:09.232002 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-13 00:55:09.232012 | orchestrator | Tuesday 13 January 2026 00:50:42 +0000 
(0:00:00.319) 0:06:54.820 ******* 2026-01-13 00:55:09.232018 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.232024 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.232030 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.232036 | orchestrator | 2026-01-13 00:55:09.232043 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-13 00:55:09.232049 | orchestrator | Tuesday 13 January 2026 00:50:43 +0000 (0:00:00.302) 0:06:55.122 ******* 2026-01-13 00:55:09.232055 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.232060 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.232066 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.232072 | orchestrator | 2026-01-13 00:55:09.232078 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-13 00:55:09.232084 | orchestrator | Tuesday 13 January 2026 00:50:43 +0000 (0:00:00.592) 0:06:55.715 ******* 2026-01-13 00:55:09.232091 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.232097 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.232104 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.232110 | orchestrator | 2026-01-13 00:55:09.232117 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-13 00:55:09.232128 | orchestrator | Tuesday 13 January 2026 00:50:43 +0000 (0:00:00.346) 0:06:56.061 ******* 2026-01-13 00:55:09.232134 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.232141 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.232147 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.232153 | orchestrator | 2026-01-13 00:55:09.232160 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-13 00:55:09.232165 | orchestrator | Tuesday 13 January 2026 00:50:44 +0000 (0:00:00.348) 0:06:56.410 ******* 2026-01-13 
00:55:09.232172 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.232183 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.232189 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.232196 | orchestrator | 2026-01-13 00:55:09.232201 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-13 00:55:09.232208 | orchestrator | Tuesday 13 January 2026 00:50:44 +0000 (0:00:00.300) 0:06:56.710 ******* 2026-01-13 00:55:09.232214 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.232221 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.232227 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.232233 | orchestrator | 2026-01-13 00:55:09.232240 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-13 00:55:09.232245 | orchestrator | Tuesday 13 January 2026 00:50:45 +0000 (0:00:00.537) 0:06:57.248 ******* 2026-01-13 00:55:09.232251 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.232257 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.232263 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.232269 | orchestrator | 2026-01-13 00:55:09.232275 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-13 00:55:09.232283 | orchestrator | Tuesday 13 January 2026 00:50:45 +0000 (0:00:00.307) 0:06:57.555 ******* 2026-01-13 00:55:09.232290 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.232297 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.232304 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.232311 | orchestrator | 2026-01-13 00:55:09.232317 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-13 00:55:09.232324 | orchestrator | Tuesday 13 January 2026 00:50:45 +0000 (0:00:00.352) 0:06:57.908 ******* 2026-01-13 00:55:09.232330 | 
orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.232336 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.232343 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.232349 | orchestrator |
2026-01-13 00:55:09.232355 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-01-13 00:55:09.232362 | orchestrator | Tuesday 13 January 2026 00:50:46 +0000 (0:00:00.770) 0:06:58.678 *******
2026-01-13 00:55:09.232369 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.232376 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.232383 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.232390 | orchestrator |
2026-01-13 00:55:09.232397 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-01-13 00:55:09.232404 | orchestrator | Tuesday 13 January 2026 00:50:46 +0000 (0:00:00.315) 0:06:58.994 *******
2026-01-13 00:55:09.232411 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-13 00:55:09.232418 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-13 00:55:09.232425 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-13 00:55:09.232432 | orchestrator |
2026-01-13 00:55:09.232439 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-01-13 00:55:09.232446 | orchestrator | Tuesday 13 January 2026 00:50:47 +0000 (0:00:00.607) 0:06:59.601 *******
2026-01-13 00:55:09.232453 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 00:55:09.232460 | orchestrator |
2026-01-13 00:55:09.232467 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-01-13 00:55:09.232474 | orchestrator | Tuesday 13 January 2026 00:50:48 +0000 (0:00:00.520) 0:07:00.121 *******
2026-01-13 00:55:09.232481 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.232487 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.232494 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.232501 | orchestrator |
2026-01-13 00:55:09.232508 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-01-13 00:55:09.232515 | orchestrator | Tuesday 13 January 2026 00:50:48 +0000 (0:00:00.608) 0:07:00.730 *******
2026-01-13 00:55:09.232527 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.232534 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.232541 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.232548 | orchestrator |
2026-01-13 00:55:09.232555 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-01-13 00:55:09.232562 | orchestrator | Tuesday 13 January 2026 00:50:48 +0000 (0:00:00.305) 0:07:01.036 *******
2026-01-13 00:55:09.232568 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.232574 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.232580 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.232586 | orchestrator |
2026-01-13 00:55:09.232596 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-01-13 00:55:09.232602 | orchestrator | Tuesday 13 January 2026 00:50:49 +0000 (0:00:00.657) 0:07:01.693 *******
2026-01-13 00:55:09.232609 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.232616 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.232623 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.232630 | orchestrator |
2026-01-13 00:55:09.232637 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-01-13 00:55:09.232643 | orchestrator | Tuesday 13 January 2026 00:50:49 +0000 (0:00:00.354) 0:07:02.048 *******
2026-01-13 00:55:09.232650 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-13 00:55:09.232657 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-13 00:55:09.232664 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-13 00:55:09.232678 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-13 00:55:09.232685 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-13 00:55:09.232692 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-13 00:55:09.232699 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-13 00:55:09.232706 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-13 00:55:09.232713 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-13 00:55:09.232720 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-13 00:55:09.232727 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-13 00:55:09.232734 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-13 00:55:09.232741 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-13 00:55:09.232748 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-13 00:55:09.232767 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-13 00:55:09.232773 | orchestrator |
2026-01-13 00:55:09.232779 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-01-13 00:55:09.232785 | orchestrator | Tuesday 13 January 2026 00:50:55 +0000 (0:00:05.525) 0:07:07.574 *******
2026-01-13 00:55:09.232791 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.232797 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.232804 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.232810 | orchestrator |
2026-01-13 00:55:09.232818 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-01-13 00:55:09.232825 | orchestrator | Tuesday 13 January 2026 00:50:55 +0000 (0:00:00.316) 0:07:07.890 *******
2026-01-13 00:55:09.232832 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 00:55:09.232839 | orchestrator |
2026-01-13 00:55:09.232845 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-01-13 00:55:09.232856 | orchestrator | Tuesday 13 January 2026 00:50:56 +0000 (0:00:00.524) 0:07:08.414 *******
2026-01-13 00:55:09.232862 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-13 00:55:09.232869 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-13 00:55:09.232875 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-01-13 00:55:09.232882 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-01-13 00:55:09.232889 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-13 00:55:09.232895 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-01-13 00:55:09.232901 | orchestrator |
2026-01-13 00:55:09.232908 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-01-13 00:55:09.232914 | orchestrator | Tuesday 13 January 2026 00:50:57 +0000 (0:00:01.266) 0:07:09.681 *******
2026-01-13 00:55:09.232920 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-13 00:55:09.232926 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-13 00:55:09.232933 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-13 00:55:09.232939 | orchestrator |
2026-01-13 00:55:09.232946 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-01-13 00:55:09.232952 | orchestrator | Tuesday 13 January 2026 00:50:59 +0000 (0:00:02.105) 0:07:11.787 *******
2026-01-13 00:55:09.232959 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-13 00:55:09.232965 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-13 00:55:09.232971 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:55:09.232977 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-13 00:55:09.232983 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-13 00:55:09.232989 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:55:09.232995 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-13 00:55:09.233001 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-13 00:55:09.233008 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:55:09.233014 | orchestrator |
2026-01-13 00:55:09.233020 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-01-13 00:55:09.233027 | orchestrator | Tuesday 13 January 2026 00:51:01 +0000 (0:00:01.328) 0:07:13.116 *******
2026-01-13 00:55:09.233037 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-13 00:55:09.233044 | orchestrator |
2026-01-13 00:55:09.233051 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-01-13 00:55:09.233059 | orchestrator | Tuesday 13 January 2026 00:51:03 +0000 (0:00:02.308) 0:07:15.425 *******
2026-01-13 00:55:09.233065 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 00:55:09.233072 | orchestrator |
2026-01-13 00:55:09.233079 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-01-13 00:55:09.233086 | orchestrator | Tuesday 13 January 2026 00:51:04 +0000 (0:00:00.826) 0:07:16.251 *******
2026-01-13 00:55:09.233093 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b9be54a9-cd9c-568c-9220-61b18da052d9', 'data_vg': 'ceph-b9be54a9-cd9c-568c-9220-61b18da052d9'})
2026-01-13 00:55:09.233105 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e91d200a-cf56-55df-b2f8-08f15361112f', 'data_vg': 'ceph-e91d200a-cf56-55df-b2f8-08f15361112f'})
2026-01-13 00:55:09.233113 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5', 'data_vg': 'ceph-11aa5137-b5aa-5373-b4c1-0bd5a429c1a5'})
2026-01-13 00:55:09.233120 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-03961d85-1922-5669-8251-0ccc6cca9fac', 'data_vg': 'ceph-03961d85-1922-5669-8251-0ccc6cca9fac'})
2026-01-13 00:55:09.233127 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7ebda4f6-7b50-59b0-8273-b291dd7d1677', 'data_vg': 'ceph-7ebda4f6-7b50-59b0-8273-b291dd7d1677'})
2026-01-13 00:55:09.233138 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2b3e8737-91e3-53c0-9b3a-5288a4111b63', 'data_vg': 'ceph-2b3e8737-91e3-53c0-9b3a-5288a4111b63'})
2026-01-13 00:55:09.233145 | orchestrator |
2026-01-13 00:55:09.233152 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-01-13 00:55:09.233159 | orchestrator | Tuesday 13 January 2026 00:51:46 +0000 (0:00:42.301) 0:07:58.553 *******
2026-01-13 00:55:09.233166 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233173 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.233180 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.233187 | orchestrator |
2026-01-13 00:55:09.233194 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-01-13 00:55:09.233201 | orchestrator | Tuesday 13 January 2026 00:51:46 +0000 (0:00:00.323) 0:07:58.876 *******
2026-01-13 00:55:09.233209 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 00:55:09.233216 | orchestrator |
2026-01-13 00:55:09.233223 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-01-13 00:55:09.233230 | orchestrator | Tuesday 13 January 2026 00:51:47 +0000 (0:00:00.752) 0:07:59.629 *******
2026-01-13 00:55:09.233236 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.233243 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.233249 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.233256 | orchestrator |
2026-01-13 00:55:09.233262 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-01-13 00:55:09.233268 | orchestrator | Tuesday 13 January 2026 00:51:48 +0000 (0:00:00.721) 0:08:00.351 *******
2026-01-13 00:55:09.233274 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.233280 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.233285 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.233291 | orchestrator |
2026-01-13 00:55:09.233297 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-01-13 00:55:09.233303 | orchestrator | Tuesday 13 January 2026 00:51:51 +0000 (0:00:03.093) 0:08:03.444 *******
2026-01-13 00:55:09.233309 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 00:55:09.233316 | orchestrator |
2026-01-13 00:55:09.233322 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-01-13 00:55:09.233328 | orchestrator | Tuesday 13 January 2026 00:51:52 +0000 (0:00:00.746) 0:08:04.191 *******
2026-01-13 00:55:09.233334 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:55:09.233341 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:55:09.233347 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:55:09.233353 | orchestrator |
2026-01-13 00:55:09.233360 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-01-13 00:55:09.233366 | orchestrator | Tuesday 13 January 2026 00:51:53 +0000 (0:00:01.114) 0:08:05.306 *******
2026-01-13 00:55:09.233373 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:55:09.233379 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:55:09.233384 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:55:09.233388 | orchestrator |
2026-01-13 00:55:09.233392 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-01-13 00:55:09.233396 | orchestrator | Tuesday 13 January 2026 00:51:54 +0000 (0:00:01.169) 0:08:06.475 *******
2026-01-13 00:55:09.233400 | orchestrator | changed: [testbed-node-3]
2026-01-13 00:55:09.233403 | orchestrator | changed: [testbed-node-4]
2026-01-13 00:55:09.233407 | orchestrator | changed: [testbed-node-5]
2026-01-13 00:55:09.233411 | orchestrator |
2026-01-13 00:55:09.233414 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-01-13 00:55:09.233419 | orchestrator | Tuesday 13 January 2026 00:51:56 +0000 (0:00:01.669) 0:08:08.144 *******
2026-01-13 00:55:09.233425 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233431 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.233442 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.233448 | orchestrator |
2026-01-13 00:55:09.233455 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-01-13 00:55:09.233461 | orchestrator | Tuesday 13 January 2026 00:51:56 +0000 (0:00:00.597) 0:08:08.742 *******
2026-01-13 00:55:09.233471 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233477 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.233483 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.233489 | orchestrator |
2026-01-13 00:55:09.233496 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-01-13 00:55:09.233502 | orchestrator | Tuesday 13 January 2026 00:51:56 +0000 (0:00:00.317) 0:08:09.059 *******
2026-01-13 00:55:09.233507 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-13 00:55:09.233511 | orchestrator | ok: [testbed-node-4] => (item=2)
2026-01-13 00:55:09.233517 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-01-13 00:55:09.233523 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-01-13 00:55:09.233530 | orchestrator | ok: [testbed-node-4] => (item=5)
2026-01-13 00:55:09.233536 | orchestrator | ok: [testbed-node-5] => (item=3)
2026-01-13 00:55:09.233542 | orchestrator |
2026-01-13 00:55:09.233549 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-01-13 00:55:09.233555 | orchestrator | Tuesday 13 January 2026 00:51:58 +0000 (0:00:01.078) 0:08:10.138 *******
2026-01-13 00:55:09.233562 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-13 00:55:09.233568 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-01-13 00:55:09.233579 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-01-13 00:55:09.233585 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-01-13 00:55:09.233592 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-01-13 00:55:09.233598 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-01-13 00:55:09.233604 | orchestrator |
2026-01-13 00:55:09.233610 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-01-13 00:55:09.233617 | orchestrator | Tuesday 13 January 2026 00:52:00 +0000 (0:00:02.017) 0:08:12.155 *******
2026-01-13 00:55:09.233623 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-13 00:55:09.233630 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-01-13 00:55:09.233635 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-01-13 00:55:09.233639 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-01-13 00:55:09.233642 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-01-13 00:55:09.233646 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-01-13 00:55:09.233650 | orchestrator |
2026-01-13 00:55:09.233653 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-01-13 00:55:09.233657 | orchestrator | Tuesday 13 January 2026 00:52:03 +0000 (0:00:03.878) 0:08:16.034 *******
2026-01-13 00:55:09.233661 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233664 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.233668 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-13 00:55:09.233672 | orchestrator |
2026-01-13 00:55:09.233676 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-01-13 00:55:09.233679 | orchestrator | Tuesday 13 January 2026 00:52:07 +0000 (0:00:03.097) 0:08:19.132 *******
2026-01-13 00:55:09.233683 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233687 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.233690 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-01-13 00:55:09.233694 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-13 00:55:09.233698 | orchestrator |
2026-01-13 00:55:09.233702 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-01-13 00:55:09.233705 | orchestrator | Tuesday 13 January 2026 00:52:19 +0000 (0:00:12.335) 0:08:31.467 *******
2026-01-13 00:55:09.233709 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233713 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.233721 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.233725 | orchestrator |
2026-01-13 00:55:09.233729 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-13 00:55:09.233733 | orchestrator | Tuesday 13 January 2026 00:52:20 +0000 (0:00:01.044) 0:08:32.512 *******
2026-01-13 00:55:09.233736 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233740 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.233744 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.233748 | orchestrator |
2026-01-13 00:55:09.233783 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-13 00:55:09.233789 | orchestrator | Tuesday 13 January 2026 00:52:20 +0000 (0:00:00.347) 0:08:32.859 *******
2026-01-13 00:55:09.233795 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 00:55:09.233798 | orchestrator |
2026-01-13 00:55:09.233802 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-13 00:55:09.233806 | orchestrator | Tuesday 13 January 2026 00:52:21 +0000 (0:00:00.518) 0:08:33.378 *******
2026-01-13 00:55:09.233810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-13 00:55:09.233814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-13 00:55:09.233817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-13 00:55:09.233821 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233825 | orchestrator |
2026-01-13 00:55:09.233829 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-13 00:55:09.233832 | orchestrator | Tuesday 13 January 2026 00:52:22 +0000 (0:00:00.944) 0:08:34.322 *******
2026-01-13 00:55:09.233836 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233840 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.233844 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.233847 | orchestrator |
2026-01-13 00:55:09.233851 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-13 00:55:09.233855 | orchestrator | Tuesday 13 January 2026 00:52:22 +0000 (0:00:00.323) 0:08:34.646 *******
2026-01-13 00:55:09.233859 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233862 | orchestrator |
2026-01-13 00:55:09.233866 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-13 00:55:09.233870 | orchestrator | Tuesday 13 January 2026 00:52:22 +0000 (0:00:00.243) 0:08:34.889 *******
2026-01-13 00:55:09.233874 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233877 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.233884 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.233887 | orchestrator |
2026-01-13 00:55:09.233891 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-13 00:55:09.233895 | orchestrator | Tuesday 13 January 2026 00:52:23 +0000 (0:00:00.307) 0:08:35.197 *******
2026-01-13 00:55:09.233899 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233903 | orchestrator |
2026-01-13 00:55:09.233906 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-13 00:55:09.233910 | orchestrator | Tuesday 13 January 2026 00:52:23 +0000 (0:00:00.245) 0:08:35.442 *******
2026-01-13 00:55:09.233914 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233917 | orchestrator |
2026-01-13 00:55:09.233921 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-13 00:55:09.233925 | orchestrator | Tuesday 13 January 2026 00:52:23 +0000 (0:00:00.225) 0:08:35.668 *******
2026-01-13 00:55:09.233929 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233932 | orchestrator |
2026-01-13 00:55:09.233936 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-13 00:55:09.233943 | orchestrator | Tuesday 13 January 2026 00:52:23 +0000 (0:00:00.116) 0:08:35.785 *******
2026-01-13 00:55:09.233947 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233950 | orchestrator |
2026-01-13 00:55:09.233954 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-13 00:55:09.233961 | orchestrator | Tuesday 13 January 2026 00:52:23 +0000 (0:00:00.199) 0:08:35.984 *******
2026-01-13 00:55:09.233964 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233968 | orchestrator |
2026-01-13 00:55:09.233972 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-13 00:55:09.233976 | orchestrator | Tuesday 13 January 2026 00:52:24 +0000 (0:00:00.856) 0:08:36.840 *******
2026-01-13 00:55:09.233979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-13 00:55:09.233983 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-13 00:55:09.233987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-13 00:55:09.233991 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.233994 | orchestrator |
2026-01-13 00:55:09.233998 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-13 00:55:09.234002 | orchestrator | Tuesday 13 January 2026 00:52:25 +0000 (0:00:00.412) 0:08:37.252 *******
2026-01-13 00:55:09.234006 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.234009 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.234035 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.234040 | orchestrator |
2026-01-13 00:55:09.234043 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-13 00:55:09.234047 | orchestrator | Tuesday 13 January 2026 00:52:25 +0000 (0:00:00.356) 0:08:37.609 *******
2026-01-13 00:55:09.234051 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.234055 | orchestrator |
2026-01-13 00:55:09.234059 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-13 00:55:09.234062 | orchestrator | Tuesday 13 January 2026 00:52:25 +0000 (0:00:00.214) 0:08:37.823 *******
2026-01-13 00:55:09.234066 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.234070 | orchestrator |
2026-01-13 00:55:09.234074 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-01-13 00:55:09.234077 | orchestrator |
2026-01-13 00:55:09.234081 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-13 00:55:09.234085 | orchestrator | Tuesday 13 January 2026 00:52:26 +0000 (0:00:00.966) 0:08:38.789 *******
2026-01-13 00:55:09.234089 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:55:09.234093 | orchestrator |
2026-01-13 00:55:09.234097 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-13 00:55:09.234101 | orchestrator | Tuesday 13 January 2026 00:52:27 +0000 (0:00:01.276) 0:08:40.066 *******
2026-01-13 00:55:09.234105 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:55:09.234108 | orchestrator |
2026-01-13 00:55:09.234112 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-13 00:55:09.234116 | orchestrator | Tuesday 13 January 2026 00:52:29 +0000 (0:00:01.033) 0:08:41.100 *******
2026-01-13 00:55:09.234120 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.234123 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.234127 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.234131 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.234135 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.234138 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.234142 | orchestrator |
2026-01-13 00:55:09.234146 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-13 00:55:09.234150 | orchestrator | Tuesday 13 January 2026 00:52:30 +0000 (0:00:01.257) 0:08:42.358 *******
2026-01-13 00:55:09.234153 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.234157 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.234161 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.234165 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.234171 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.234175 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.234179 | orchestrator |
2026-01-13 00:55:09.234183 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-13 00:55:09.234186 | orchestrator | Tuesday 13 January 2026 00:52:31 +0000 (0:00:00.833) 0:08:43.191 *******
2026-01-13 00:55:09.234190 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.234194 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.234198 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.234201 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.234205 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.234209 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.234213 | orchestrator |
2026-01-13 00:55:09.234216 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-13 00:55:09.234222 | orchestrator | Tuesday 13 January 2026 00:52:32 +0000 (0:00:01.063) 0:08:44.255 *******
2026-01-13 00:55:09.234226 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.234230 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.234233 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.234237 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.234241 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.234245 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.234248 | orchestrator |
2026-01-13 00:55:09.234252 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-13 00:55:09.234256 | orchestrator | Tuesday 13 January 2026 00:52:33 +0000 (0:00:00.868) 0:08:45.124 *******
2026-01-13 00:55:09.234260 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.234263 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.234267 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.234271 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.234274 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.234278 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.234282 | orchestrator |
2026-01-13 00:55:09.234286 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-13 00:55:09.234292 | orchestrator | Tuesday 13 January 2026 00:52:34 +0000 (0:00:01.464) 0:08:46.589 *******
2026-01-13 00:55:09.234296 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.234300 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.234304 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.234307 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.234311 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.234315 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.234318 | orchestrator |
2026-01-13 00:55:09.234322 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-13 00:55:09.234326 | orchestrator | Tuesday 13 January 2026 00:52:35 +0000 (0:00:00.589) 0:08:47.179 *******
2026-01-13 00:55:09.234330 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.234333 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.234337 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.234341 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.234345 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.234348 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.234352 | orchestrator |
2026-01-13 00:55:09.234356 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-13 00:55:09.234360 | orchestrator | Tuesday 13 January 2026 00:52:36 +0000 (0:00:00.993) 0:08:48.173 *******
2026-01-13 00:55:09.234364 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.234367 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.234371 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.234375 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.234379 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.234382 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.234386 | orchestrator |
2026-01-13 00:55:09.234390 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-13 00:55:09.234396 | orchestrator | Tuesday 13 January 2026 00:52:37 +0000 (0:00:01.163) 0:08:49.336 *******
2026-01-13 00:55:09.234400 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.234403 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.234407 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.234411 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.234415 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.234418 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.234422 | orchestrator |
2026-01-13 00:55:09.234426 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-13 00:55:09.234430 | orchestrator | Tuesday 13 January 2026 00:52:38 +0000 (0:00:01.219) 0:08:50.556 *******
2026-01-13 00:55:09.234433 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.234437 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.234441 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.234445 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.234448 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.234452 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.234456 | orchestrator |
2026-01-13 00:55:09.234460 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-13 00:55:09.234463 | orchestrator | Tuesday 13 January 2026 00:52:39 +0000 (0:00:00.584) 0:08:51.140 *******
2026-01-13 00:55:09.234467 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.234471 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.234475 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.234478 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.234482 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.234486 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.234490 | orchestrator |
2026-01-13 00:55:09.234493 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-13 00:55:09.234497 | orchestrator | Tuesday 13 January 2026 00:52:39 +0000 (0:00:00.700) 0:08:51.841 *******
2026-01-13 00:55:09.234501 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.234505 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.234508 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.234512 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.234516 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.234520 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.234523 | orchestrator |
2026-01-13 00:55:09.234527 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-13 00:55:09.234531 | orchestrator | Tuesday 13 January 2026 00:52:40 +0000 (0:00:00.519) 0:08:52.360 *******
2026-01-13 00:55:09.234535 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.234538 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.234542 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.234546 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.234550 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.234553 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.234557 | orchestrator |
2026-01-13 00:55:09.234561 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-13 00:55:09.234565 | orchestrator | Tuesday 13 January 2026 00:52:41 +0000 (0:00:00.735) 0:08:53.096 *******
2026-01-13 00:55:09.234568 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.234572 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.234576 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.234579 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.234583 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.234587 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.234591 | orchestrator |
2026-01-13 00:55:09.234595 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-13 00:55:09.234599 | orchestrator | Tuesday 13 January 2026 00:52:41 +0000 (0:00:00.537) 0:08:53.634 *******
2026-01-13 00:55:09.234603 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.234606 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.234612 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.234616 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.234620 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.234623 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.234627 | orchestrator |
2026-01-13 00:55:09.234668 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-13 00:55:09.234682 | orchestrator | Tuesday 13 January 2026 00:52:42 +0000 (0:00:00.727) 0:08:54.361 *******
2026-01-13 00:55:09.234686 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.234690 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.234693 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.234697 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:55:09.234701 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:55:09.234704 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:55:09.234708 | orchestrator |
2026-01-13 00:55:09.234712 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-13 00:55:09.234719 | orchestrator | Tuesday 13 January 2026 00:52:42 +0000 (0:00:00.509) 0:08:54.871 *******
2026-01-13 00:55:09.234723 | orchestrator | skipping: [testbed-node-3]
2026-01-13 00:55:09.234726 | orchestrator | skipping: [testbed-node-4]
2026-01-13 00:55:09.234730 | orchestrator | skipping: [testbed-node-5]
2026-01-13 00:55:09.234734 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.234738 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.234741 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.234745 | orchestrator |
2026-01-13 00:55:09.234759 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-13 00:55:09.234764 | orchestrator | Tuesday 13 January 2026 00:52:43 +0000 (0:00:00.824) 0:08:55.695 *******
2026-01-13 00:55:09.234767 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.234771 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.234775 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.234778 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.234782 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.234786 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.234789 | orchestrator |
2026-01-13 00:55:09.234793 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-13 00:55:09.234797 | orchestrator | Tuesday 13 January 2026 00:52:44 +0000 (0:00:00.476) 0:08:56.172 *******
2026-01-13 00:55:09.234801 | orchestrator | ok: [testbed-node-3]
2026-01-13 00:55:09.234804 | orchestrator | ok: [testbed-node-4]
2026-01-13 00:55:09.234808 | orchestrator | ok: [testbed-node-5]
2026-01-13 00:55:09.234812 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:55:09.234815 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:55:09.234819 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:55:09.234823 | orchestrator |
2026-01-13 00:55:09.234826 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-01-13 00:55:09.234830 | orchestrator | Tuesday 13 January 2026 00:52:45 +0000 (0:00:01.063) 0:08:57.236 *******
2026-01-13 00:55:09.234834 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-01-13 00:55:09.234837 | orchestrator | 2026-01-13 00:55:09.234841 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-01-13 00:55:09.234845 | orchestrator | Tuesday 13 January 2026 00:52:49 +0000 (0:00:04.212) 0:09:01.448 ******* 2026-01-13 00:55:09.234849 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-13 00:55:09.234852 | orchestrator | 2026-01-13 00:55:09.234856 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-01-13 00:55:09.234860 | orchestrator | Tuesday 13 January 2026 00:52:51 +0000 (0:00:02.054) 0:09:03.503 ******* 2026-01-13 00:55:09.234864 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.234867 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.234871 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.234875 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.234878 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.234882 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.234889 | orchestrator | 2026-01-13 00:55:09.234892 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-01-13 00:55:09.234896 | orchestrator | Tuesday 13 January 2026 00:52:53 +0000 (0:00:01.895) 0:09:05.398 ******* 2026-01-13 00:55:09.234900 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.234903 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.234907 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.234911 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.234915 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.234918 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.234922 | orchestrator | 2026-01-13 00:55:09.234926 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-01-13 00:55:09.234929 | orchestrator | Tuesday 13 January 2026 00:52:54 +0000 (0:00:01.138) 0:09:06.537 ******* 2026-01-13 00:55:09.234933 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.234938 | orchestrator | 2026-01-13 00:55:09.234942 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-01-13 00:55:09.234946 | orchestrator | Tuesday 13 January 2026 00:52:55 +0000 (0:00:01.049) 0:09:07.587 ******* 2026-01-13 00:55:09.234949 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.234953 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.234957 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.234960 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.234964 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.234968 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.234971 | orchestrator | 2026-01-13 00:55:09.234975 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-01-13 00:55:09.234979 | orchestrator | Tuesday 13 January 2026 00:52:57 +0000 (0:00:01.816) 0:09:09.403 ******* 2026-01-13 00:55:09.234982 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.234986 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.234990 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.234994 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.234997 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.235003 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.235007 | orchestrator | 2026-01-13 00:55:09.235010 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-01-13 00:55:09.235014 | orchestrator | Tuesday 13 January 2026 00:53:00 +0000 (0:00:03.410) 
0:09:12.814 ******* 2026-01-13 00:55:09.235018 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:09.235022 | orchestrator | 2026-01-13 00:55:09.235025 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-01-13 00:55:09.235029 | orchestrator | Tuesday 13 January 2026 00:53:02 +0000 (0:00:01.388) 0:09:14.202 ******* 2026-01-13 00:55:09.235033 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.235037 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.235040 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.235044 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.235048 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.235051 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.235055 | orchestrator | 2026-01-13 00:55:09.235059 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-01-13 00:55:09.235065 | orchestrator | Tuesday 13 January 2026 00:53:02 +0000 (0:00:00.743) 0:09:14.946 ******* 2026-01-13 00:55:09.235069 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.235073 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.235077 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.235080 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:09.235084 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:09.235088 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:09.235094 | orchestrator | 2026-01-13 00:55:09.235098 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-01-13 00:55:09.235101 | orchestrator | Tuesday 13 January 2026 00:53:05 +0000 (0:00:02.606) 0:09:17.552 ******* 2026-01-13 00:55:09.235105 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.235109 | orchestrator 
| ok: [testbed-node-4] 2026-01-13 00:55:09.235112 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.235116 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:09.235120 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:09.235124 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:09.235127 | orchestrator | 2026-01-13 00:55:09.235131 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-01-13 00:55:09.235135 | orchestrator | 2026-01-13 00:55:09.235138 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-13 00:55:09.235142 | orchestrator | Tuesday 13 January 2026 00:53:06 +0000 (0:00:00.924) 0:09:18.477 ******* 2026-01-13 00:55:09.235146 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:55:09.235150 | orchestrator | 2026-01-13 00:55:09.235153 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-13 00:55:09.235157 | orchestrator | Tuesday 13 January 2026 00:53:06 +0000 (0:00:00.441) 0:09:18.919 ******* 2026-01-13 00:55:09.235161 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:55:09.235165 | orchestrator | 2026-01-13 00:55:09.235168 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-13 00:55:09.235172 | orchestrator | Tuesday 13 January 2026 00:53:07 +0000 (0:00:00.665) 0:09:19.585 ******* 2026-01-13 00:55:09.235176 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.235180 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.235183 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.235187 | orchestrator | 2026-01-13 00:55:09.235191 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-01-13 00:55:09.235195 | orchestrator | Tuesday 13 January 2026 00:53:07 +0000 (0:00:00.298) 0:09:19.884 ******* 2026-01-13 00:55:09.235198 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.235202 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.235206 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.235209 | orchestrator | 2026-01-13 00:55:09.235213 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-13 00:55:09.235217 | orchestrator | Tuesday 13 January 2026 00:53:08 +0000 (0:00:00.662) 0:09:20.546 ******* 2026-01-13 00:55:09.235220 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.235224 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.235228 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.235232 | orchestrator | 2026-01-13 00:55:09.235235 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-13 00:55:09.235239 | orchestrator | Tuesday 13 January 2026 00:53:09 +0000 (0:00:00.998) 0:09:21.545 ******* 2026-01-13 00:55:09.235243 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.235246 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.235250 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.235260 | orchestrator | 2026-01-13 00:55:09.235264 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-13 00:55:09.235272 | orchestrator | Tuesday 13 January 2026 00:53:10 +0000 (0:00:00.718) 0:09:22.263 ******* 2026-01-13 00:55:09.235275 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.235279 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.235283 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.235287 | orchestrator | 2026-01-13 00:55:09.235290 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-13 
00:55:09.235294 | orchestrator | Tuesday 13 January 2026 00:53:10 +0000 (0:00:00.333) 0:09:22.596 ******* 2026-01-13 00:55:09.235298 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.235304 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.235307 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.235311 | orchestrator | 2026-01-13 00:55:09.235315 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-13 00:55:09.235319 | orchestrator | Tuesday 13 January 2026 00:53:10 +0000 (0:00:00.321) 0:09:22.918 ******* 2026-01-13 00:55:09.235322 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.235326 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.235330 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.235333 | orchestrator | 2026-01-13 00:55:09.235337 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-13 00:55:09.235342 | orchestrator | Tuesday 13 January 2026 00:53:11 +0000 (0:00:00.580) 0:09:23.499 ******* 2026-01-13 00:55:09.235346 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.235350 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.235354 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.235357 | orchestrator | 2026-01-13 00:55:09.235361 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-13 00:55:09.235365 | orchestrator | Tuesday 13 January 2026 00:53:12 +0000 (0:00:00.704) 0:09:24.204 ******* 2026-01-13 00:55:09.235369 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.235372 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.235376 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.235380 | orchestrator | 2026-01-13 00:55:09.235383 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-13 00:55:09.235387 | orchestrator | 
Tuesday 13 January 2026 00:53:12 +0000 (0:00:00.786) 0:09:24.990 ******* 2026-01-13 00:55:09.235391 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.235395 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.235398 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.235402 | orchestrator | 2026-01-13 00:55:09.235406 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-13 00:55:09.235412 | orchestrator | Tuesday 13 January 2026 00:53:13 +0000 (0:00:00.318) 0:09:25.309 ******* 2026-01-13 00:55:09.235416 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.235420 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.235423 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.235427 | orchestrator | 2026-01-13 00:55:09.235431 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-13 00:55:09.235434 | orchestrator | Tuesday 13 January 2026 00:53:13 +0000 (0:00:00.574) 0:09:25.883 ******* 2026-01-13 00:55:09.235438 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.235442 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.235446 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.235449 | orchestrator | 2026-01-13 00:55:09.235453 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-13 00:55:09.235457 | orchestrator | Tuesday 13 January 2026 00:53:14 +0000 (0:00:00.342) 0:09:26.225 ******* 2026-01-13 00:55:09.235460 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.235464 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.235468 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.235472 | orchestrator | 2026-01-13 00:55:09.235475 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-13 00:55:09.235479 | orchestrator | Tuesday 13 January 2026 00:53:14 +0000 
(0:00:00.338) 0:09:26.564 ******* 2026-01-13 00:55:09.235483 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.235486 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.235490 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.235494 | orchestrator | 2026-01-13 00:55:09.235497 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-13 00:55:09.235501 | orchestrator | Tuesday 13 January 2026 00:53:14 +0000 (0:00:00.335) 0:09:26.900 ******* 2026-01-13 00:55:09.235505 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.235509 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.235512 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.235519 | orchestrator | 2026-01-13 00:55:09.235522 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-13 00:55:09.235526 | orchestrator | Tuesday 13 January 2026 00:53:15 +0000 (0:00:00.667) 0:09:27.568 ******* 2026-01-13 00:55:09.235530 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.235533 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.235537 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.235541 | orchestrator | 2026-01-13 00:55:09.235545 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-13 00:55:09.235548 | orchestrator | Tuesday 13 January 2026 00:53:15 +0000 (0:00:00.317) 0:09:27.885 ******* 2026-01-13 00:55:09.235552 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.235556 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.235559 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.235563 | orchestrator | 2026-01-13 00:55:09.235567 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-13 00:55:09.235570 | orchestrator | Tuesday 13 January 2026 00:53:16 +0000 (0:00:00.333) 
0:09:28.218 ******* 2026-01-13 00:55:09.235574 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.235578 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.235582 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.235585 | orchestrator | 2026-01-13 00:55:09.235589 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-13 00:55:09.235593 | orchestrator | Tuesday 13 January 2026 00:53:16 +0000 (0:00:00.317) 0:09:28.536 ******* 2026-01-13 00:55:09.235597 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:55:09.235600 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:55:09.235604 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:55:09.235608 | orchestrator | 2026-01-13 00:55:09.235611 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-01-13 00:55:09.235615 | orchestrator | Tuesday 13 January 2026 00:53:17 +0000 (0:00:00.823) 0:09:29.360 ******* 2026-01-13 00:55:09.235619 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.235623 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.235626 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-01-13 00:55:09.235630 | orchestrator | 2026-01-13 00:55:09.235634 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-01-13 00:55:09.235638 | orchestrator | Tuesday 13 January 2026 00:53:17 +0000 (0:00:00.386) 0:09:29.746 ******* 2026-01-13 00:55:09.235641 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-13 00:55:09.235645 | orchestrator | 2026-01-13 00:55:09.235649 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-01-13 00:55:09.235652 | orchestrator | Tuesday 13 January 2026 00:53:19 +0000 (0:00:02.127) 0:09:31.874 ******* 2026-01-13 00:55:09.235657 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-01-13 00:55:09.235664 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.235667 | orchestrator | 2026-01-13 00:55:09.235672 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-01-13 00:55:09.235675 | orchestrator | Tuesday 13 January 2026 00:53:20 +0000 (0:00:00.223) 0:09:32.097 ******* 2026-01-13 00:55:09.235680 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-13 00:55:09.235687 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-13 00:55:09.235693 | orchestrator | 2026-01-13 00:55:09.235699 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-01-13 00:55:09.235703 | orchestrator | Tuesday 13 January 2026 00:53:28 +0000 (0:00:08.560) 0:09:40.657 ******* 2026-01-13 00:55:09.235706 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-13 00:55:09.235710 | orchestrator | 2026-01-13 00:55:09.235714 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-01-13 00:55:09.235718 | orchestrator | Tuesday 13 January 2026 00:53:32 +0000 (0:00:03.786) 0:09:44.444 ******* 2026-01-13 00:55:09.235721 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-13 00:55:09.235725 | orchestrator | 2026-01-13 00:55:09.235729 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-01-13 00:55:09.235732 | orchestrator | Tuesday 13 January 2026 00:53:32 +0000 (0:00:00.581) 0:09:45.026 ******* 2026-01-13 00:55:09.235736 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-13 00:55:09.235740 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-13 00:55:09.235744 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-13 00:55:09.235747 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-01-13 00:55:09.235761 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-01-13 00:55:09.235765 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-01-13 00:55:09.235768 | orchestrator | 2026-01-13 00:55:09.235772 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-01-13 00:55:09.235776 | orchestrator | Tuesday 13 January 2026 00:53:33 +0000 (0:00:01.009) 0:09:46.036 ******* 2026-01-13 00:55:09.235780 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:55:09.235783 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-13 00:55:09.235787 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-13 00:55:09.235791 | orchestrator | 2026-01-13 00:55:09.235794 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-01-13 00:55:09.235798 | orchestrator | Tuesday 13 January 2026 00:53:36 +0000 (0:00:02.379) 0:09:48.415 ******* 2026-01-13 00:55:09.235802 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-13 00:55:09.235806 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-01-13 00:55:09.235810 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.235813 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-13 00:55:09.235817 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-13 00:55:09.235821 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.235824 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-13 00:55:09.235828 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-13 00:55:09.235832 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.235836 | orchestrator | 2026-01-13 00:55:09.235839 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-13 00:55:09.235843 | orchestrator | Tuesday 13 January 2026 00:53:37 +0000 (0:00:01.588) 0:09:50.004 ******* 2026-01-13 00:55:09.235847 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.235851 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.235854 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.235858 | orchestrator | 2026-01-13 00:55:09.235862 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-13 00:55:09.235866 | orchestrator | Tuesday 13 January 2026 00:53:41 +0000 (0:00:03.348) 0:09:53.352 ******* 2026-01-13 00:55:09.235869 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:55:09.235873 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:55:09.235877 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:55:09.235880 | orchestrator | 2026-01-13 00:55:09.235884 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-13 00:55:09.235918 | orchestrator | Tuesday 13 January 2026 00:53:41 +0000 (0:00:00.306) 0:09:53.659 ******* 2026-01-13 00:55:09.235922 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-13 00:55:09.235926 | orchestrator | 2026-01-13 00:55:09.235930 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-01-13 00:55:09.235933 | orchestrator | Tuesday 13 January 2026 00:53:42 +0000 (0:00:00.814) 0:09:54.474 ******* 2026-01-13 00:55:09.235937 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:55:09.235941 | orchestrator | 2026-01-13 00:55:09.235945 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-01-13 00:55:09.235951 | orchestrator | Tuesday 13 January 2026 00:53:42 +0000 (0:00:00.530) 0:09:55.004 ******* 2026-01-13 00:55:09.235954 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.235958 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.235962 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.235966 | orchestrator | 2026-01-13 00:55:09.235969 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-01-13 00:55:09.235973 | orchestrator | Tuesday 13 January 2026 00:53:44 +0000 (0:00:01.189) 0:09:56.194 ******* 2026-01-13 00:55:09.235977 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.235981 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.235984 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.235988 | orchestrator | 2026-01-13 00:55:09.235992 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-01-13 00:55:09.235996 | orchestrator | Tuesday 13 January 2026 00:53:45 +0000 (0:00:01.385) 0:09:57.579 ******* 2026-01-13 00:55:09.235999 | orchestrator | changed: [testbed-node-3] 2026-01-13 00:55:09.236003 | orchestrator | changed: [testbed-node-4] 2026-01-13 00:55:09.236007 | orchestrator | changed: [testbed-node-5] 2026-01-13 00:55:09.236011 | orchestrator | 2026-01-13 
2026-01-13 00:55:09.236014 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
Tuesday 13 January 2026 00:53:47 +0000 (0:00:01.822) 0:09:59.401 *******
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Tuesday 13 January 2026 00:53:49 +0000 (0:00:01.998) 0:10:01.400 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Tuesday 13 January 2026 00:53:50 +0000 (0:00:01.684) 0:10:03.085 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Tuesday 13 January 2026 00:53:51 +0000 (0:00:00.861) 0:10:03.796 *******
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Tuesday 13 January 2026 00:53:52 +0000 (0:00:00.861) 0:10:04.658 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Tuesday 13 January 2026 00:53:52 +0000 (0:00:00.342) 0:10:05.000 *******
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Tuesday 13 January 2026 00:53:54 +0000 (0:00:01.186) 0:10:06.186 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Tuesday 13 January 2026 00:53:55 +0000 (0:00:00.984) 0:10:07.171 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Tuesday 13 January 2026 00:53:55 +0000 (0:00:00.878) 0:10:08.049 *******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Tuesday 13 January 2026 00:53:56 +0000 (0:00:00.500) 0:10:08.550 *******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Tuesday 13 January 2026 00:53:57 +0000 (0:00:00.873) 0:10:09.425 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Tuesday 13 January 2026 00:53:57 +0000 (0:00:00.310) 0:10:09.736 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Tuesday 13 January 2026 00:53:58 +0000 (0:00:00.680) 0:10:10.417 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Tuesday 13 January 2026 00:53:59 +0000 (0:00:01.043) 0:10:11.461 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Tuesday 13 January 2026 00:54:00 +0000 (0:00:00.739) 0:10:12.200 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Tuesday 13 January 2026 00:54:00 +0000 (0:00:00.312) 0:10:12.512 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Tuesday 13 January 2026 00:54:00 +0000 (0:00:00.334) 0:10:12.847 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Tuesday 13 January 2026 00:54:01 +0000 (0:00:00.583) 0:10:13.430 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Tuesday 13 January 2026 00:54:02 +0000 (0:00:00.754) 0:10:14.185 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Tuesday 13 January 2026 00:54:02 +0000 (0:00:00.883) 0:10:15.069 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Tuesday 13 January 2026 00:54:03 +0000 (0:00:00.337) 0:10:15.406 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Tuesday 13 January 2026 00:54:03 +0000 (0:00:00.676) 0:10:16.082 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Tuesday 13 January 2026 00:54:04 +0000 (0:00:00.379) 0:10:16.462 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Tuesday 13 January 2026 00:54:04 +0000 (0:00:00.357) 0:10:16.820 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Tuesday 13 January 2026 00:54:05 +0000 (0:00:00.320) 0:10:17.140 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Tuesday 13 January 2026 00:54:05 +0000 (0:00:00.631) 0:10:17.772 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Tuesday 13 January 2026 00:54:05 +0000 (0:00:00.303) 0:10:18.075 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Tuesday 13 January 2026 00:54:06 +0000 (0:00:00.320) 0:10:18.396 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Tuesday 13 January 2026 00:54:06 +0000 (0:00:00.320) 0:10:18.716 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-rgw : Include common.yml] *******************************************
Tuesday 13 January 2026 00:54:07 +0000 (0:00:00.808) 0:10:19.524 *******
included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Get keys from monitors] ***************************************
Tuesday 13 January 2026 00:54:07 +0000 (0:00:00.520) 0:10:20.045 *******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Tuesday 13 January 2026 00:54:10 +0000 (0:00:02.284) 0:10:22.329 *******
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
Tuesday 13 January 2026 00:54:11 +0000 (0:00:01.522) 0:10:23.852 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
Tuesday 13 January 2026 00:54:12 +0000 (0:00:00.334) 0:10:24.186 *******
included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Create rados gateway directories] *****************************
Tuesday 13 January 2026 00:54:12 +0000 (0:00:00.545) 0:10:24.732 *******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-rgw : Create rgw keyrings] ******************************************
Tuesday 13 January 2026 00:54:14 +0000 (0:00:01.429) 0:10:26.161 *******
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]

TASK [ceph-rgw : Get keys from monitors] ***************************************
Tuesday 13 January 2026 00:54:17 +0000 (0:00:03.924) 0:10:30.085 *******
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Tuesday 13 January 2026 00:54:20 +0000 (0:00:02.193) 0:10:32.279 *******
changed: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Rgw pool creation tasks] **************************************
Tuesday 13 January 2026 00:54:21 +0000 (0:00:01.082) 0:10:33.361 *******
included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3

TASK [ceph-rgw : Create ec profile] ********************************************
Tuesday 13 January 2026 00:54:21 +0000 (0:00:00.229) 0:10:33.591 *******
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : Set crush rule] ***********************************************
Tuesday 13 January 2026 00:54:22 +0000 (0:00:01.148) 0:10:34.739 *******
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : Create rgw pools] *********************************************
Tuesday 13 January 2026 00:54:23 +0000 (0:00:00.594) 0:10:35.334 *******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})

TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
Tuesday 13 January 2026 00:54:53 +0000 (0:00:30.369) 0:11:05.703 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
Tuesday 13 January 2026 00:54:53 +0000 (0:00:00.327) 0:11:06.030 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
Tuesday 13 January 2026 00:54:54 +0000 (0:00:00.295) 0:11:06.326 *******
included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Include_task systemd.yml] *************************************
Tuesday 13 January 2026 00:54:55 +0000 (0:00:00.806) 0:11:07.132 *******
included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Generate systemd unit file] ***********************************
Tuesday 13 January 2026 00:54:55 +0000 (0:00:00.559) 0:11:07.692 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
Tuesday 13 January 2026 00:54:56 +0000 (0:00:01.367) 0:11:09.060 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
Tuesday 13 January 2026 00:54:58 +0000 (0:00:01.575) 0:11:10.636 *******
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ceph-rgw : Systemd start rgw container] **********************************
Tuesday 13 January 2026 00:55:00 +0000 (0:00:02.274) 0:11:12.910 *******
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Tuesday 13 January 2026 00:55:03 +0000 (0:00:02.951) 0:11:15.862 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Tuesday 13 January 2026 00:55:04 +0000 (0:00:00.524) 0:11:16.386 *******
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Tuesday 13 January 2026 00:55:04 +0000 (0:00:00.514) 0:11:16.901 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Tuesday 13 January 2026 00:55:05 +0000 (0:00:00.607) 0:11:17.508 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Tuesday 13 January 2026 00:55:05 +0000 (0:00:00.333) 0:11:17.841 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Tuesday 13 January 2026 00:55:06 +0000 (0:00:00.591) 0:11:18.433 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-node-0 : ok=134  changed=35  unreachable=0  failed=0  skipped=125  rescued=0  ignored=0
testbed-node-1 : ok=127  changed=31  unreachable=0  failed=0  skipped=120  rescued=0  ignored=0
testbed-node-2 : ok=134  changed=34  unreachable=0  failed=0  skipped=119  rescued=0  ignored=0
testbed-node-3 : ok=193  changed=45  unreachable=0  failed=0  skipped=162  rescued=0  ignored=0
testbed-node-4 : ok=175  changed=40  unreachable=0  failed=0  skipped=123  rescued=0  ignored=0
testbed-node-5 : ok=177  changed=41  unreachable=0  failed=0  skipped=121  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Tuesday 13 January 2026 00:55:06 +0000 (0:00:00.257) 0:11:18.690 *******
===============================================================================
ceph-container-common : Pulling Ceph container image ------------------- 60.89s
ceph-osd : Use ceph-volume to create osds ------------------------------ 42.30s
ceph-mgr : Wait for all mgr to be up ----------------------------------- 35.44s
ceph-rgw : Create rgw pools -------------------------------------------- 30.37s
ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.75s
ceph-mon : Set cluster configs ----------------------------------------- 15.23s
ceph-osd : Wait for all osd to be up ----------------------------------- 12.34s
ceph-mon : Fetch ceph initial keys ------------------------------------- 11.07s
ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.76s
ceph-mds : Create filesystem pools -------------------------------------- 8.56s
ceph-config : Create ceph initial directories --------------------------- 7.43s
ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.29s
ceph-osd : Apply operating system tuning -------------------------------- 5.53s
ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.43s
ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 5.02s
ceph-crash : Create client.crash keyring -------------------------------- 4.21s
ceph-rgw : Create rgw keyrings ------------------------------------------ 3.92s
ceph-osd : Systemd start osd -------------------------------------------- 3.88s
ceph-mon : Generate initial monmap -------------------------------------- 3.81s
ceph-mds : Create ceph filesystem --------------------------------------- 3.79s

2026-01-13 00:55:09 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:55:12 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state STARTED
2026-01-13 00:55:12 | INFO  | Task 3a479429-e057-43b6-a348-a20efcea0e17 is in state STARTED
2026-01-13 00:55:12 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:55:15 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state STARTED
2026-01-13 00:55:15 | INFO  | Task 3a479429-e057-43b6-a348-a20efcea0e17 is in state STARTED
2026-01-13 00:55:15 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:55:18 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state STARTED
2026-01-13 00:55:18 | INFO  | Task 3a479429-e057-43b6-a348-a20efcea0e17 is in state STARTED
2026-01-13 00:55:18 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:55:21 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state STARTED
2026-01-13 00:55:21 | INFO  | Task 3a479429-e057-43b6-a348-a20efcea0e17 is in state STARTED
2026-01-13 00:55:21 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:55:24 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state STARTED
2026-01-13 00:55:24 | INFO  | Task 3a479429-e057-43b6-a348-a20efcea0e17 is in state STARTED
2026-01-13 00:55:24 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:55:27 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state STARTED
2026-01-13 00:55:27 | INFO  | Task 3a479429-e057-43b6-a348-a20efcea0e17 is in state STARTED
2026-01-13 00:55:27 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:55:30 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state STARTED
2026-01-13 00:55:30 | INFO  | Task 3a479429-e057-43b6-a348-a20efcea0e17 is in state SUCCESS
2026-01-13 00:55:30 | INFO  | Wait 1 second(s) until the next check

PLAY [Set kolla_action_mariadb] ************************************************

TASK [Inform the user about the following task] ********************************
Tuesday 13 January 2026 00:52:29 +0000 (0:00:00.092) 0:00:00.092 *******
ok: [localhost] => {
    "msg": "The task 'Check MariaDB service' fails if the MariaDB service
has not yet been deployed. This is fine." 2026-01-13 00:55:30.548120 | orchestrator | } 2026-01-13 00:55:30.548124 | orchestrator | 2026-01-13 00:55:30.548128 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-13 00:55:30.548131 | orchestrator | Tuesday 13 January 2026 00:52:29 +0000 (0:00:00.051) 0:00:00.143 ******* 2026-01-13 00:55:30.548136 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-13 00:55:30.548141 | orchestrator | ...ignoring 2026-01-13 00:55:30.548145 | orchestrator | 2026-01-13 00:55:30.548149 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-13 00:55:30.548153 | orchestrator | Tuesday 13 January 2026 00:52:32 +0000 (0:00:03.026) 0:00:03.170 ******* 2026-01-13 00:55:30.548156 | orchestrator | skipping: [localhost] 2026-01-13 00:55:30.548160 | orchestrator | 2026-01-13 00:55:30.548164 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-13 00:55:30.548168 | orchestrator | Tuesday 13 January 2026 00:52:32 +0000 (0:00:00.082) 0:00:03.252 ******* 2026-01-13 00:55:30.548171 | orchestrator | ok: [localhost] 2026-01-13 00:55:30.548175 | orchestrator | 2026-01-13 00:55:30.548179 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 00:55:30.548183 | orchestrator | 2026-01-13 00:55:30.548186 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 00:55:30.548190 | orchestrator | Tuesday 13 January 2026 00:52:33 +0000 (0:00:00.166) 0:00:03.419 ******* 2026-01-13 00:55:30.548194 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:30.548209 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:30.548213 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:30.548217 | 
orchestrator | 2026-01-13 00:55:30.548230 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 00:55:30.548234 | orchestrator | Tuesday 13 January 2026 00:52:33 +0000 (0:00:00.292) 0:00:03.712 ******* 2026-01-13 00:55:30.548237 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-13 00:55:30.548242 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-13 00:55:30.548250 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-13 00:55:30.548254 | orchestrator | 2026-01-13 00:55:30.548258 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-13 00:55:30.548262 | orchestrator | 2026-01-13 00:55:30.548265 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-13 00:55:30.548269 | orchestrator | Tuesday 13 January 2026 00:52:33 +0000 (0:00:00.661) 0:00:04.374 ******* 2026-01-13 00:55:30.548273 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-13 00:55:30.548277 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-13 00:55:30.548280 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-13 00:55:30.548284 | orchestrator | 2026-01-13 00:55:30.548288 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-13 00:55:30.548292 | orchestrator | Tuesday 13 January 2026 00:52:34 +0000 (0:00:00.388) 0:00:04.762 ******* 2026-01-13 00:55:30.548296 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:30.548304 | orchestrator | 2026-01-13 00:55:30.548311 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-13 00:55:30.548317 | orchestrator | Tuesday 13 January 2026 00:52:35 +0000 (0:00:00.641) 0:00:05.403 ******* 2026-01-13 
00:55:30.548358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-13 00:55:30.548369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-13 00:55:30.548387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-13 00:55:30.548395 | orchestrator | 2026-01-13 00:55:30.548407 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-13 00:55:30.548427 | orchestrator | Tuesday 13 January 2026 00:52:38 +0000 (0:00:03.419) 0:00:08.823 ******* 2026-01-13 00:55:30.548432 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.548436 | orchestrator | 
changed: [testbed-node-0] 2026-01-13 00:55:30.548440 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.548443 | orchestrator | 2026-01-13 00:55:30.548447 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-13 00:55:30.548451 | orchestrator | Tuesday 13 January 2026 00:52:39 +0000 (0:00:00.687) 0:00:09.510 ******* 2026-01-13 00:55:30.548455 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.548461 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.548465 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:30.548469 | orchestrator | 2026-01-13 00:55:30.548473 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-13 00:55:30.548477 | orchestrator | Tuesday 13 January 2026 00:52:40 +0000 (0:00:01.233) 0:00:10.744 ******* 2026-01-13 00:55:30.548481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-13 00:55:30.548490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-13 00:55:30.548498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-13 00:55:30.548502 | orchestrator | 2026-01-13 00:55:30.548506 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-13 00:55:30.548510 | orchestrator | Tuesday 13 January 2026 00:52:44 +0000 (0:00:04.102) 0:00:14.846 ******* 2026-01-13 00:55:30.548513 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.548517 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.548521 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:30.548524 | orchestrator | 2026-01-13 00:55:30.548528 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-13 00:55:30.548532 | orchestrator | Tuesday 13 January 2026 00:52:45 +0000 (0:00:01.025) 0:00:15.872 ******* 2026-01-13 00:55:30.548535 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:30.548539 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:30.548543 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:30.548546 | orchestrator | 2026-01-13 00:55:30.548550 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-13 00:55:30.548554 | orchestrator | Tuesday 13 January 2026 00:52:48 +0000 (0:00:03.472) 0:00:19.345 ******* 2026-01-13 00:55:30.548558 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:30.548561 | orchestrator | 2026-01-13 00:55:30.548565 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over extra CA certificates] ******** 2026-01-13 00:55:30.548593 | orchestrator | Tuesday 13 January 2026 00:52:49 +0000 (0:00:00.474) 0:00:19.819 ******* 2026-01-13 00:55:30.548605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-01-13 00:55:30.548621 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:30.548630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-13 00:55:30.548637 | orchestrator | skipping: 
[testbed-node-1] 2026-01-13 00:55:30.548650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-13 00:55:30.548663 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.548670 | orchestrator | 2026-01-13 
00:55:30.548676 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-13 00:55:30.548682 | orchestrator | Tuesday 13 January 2026 00:52:52 +0000 (0:00:03.263) 0:00:23.082 ******* 2026-01-13 00:55:30.548689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-13 00:55:30.548696 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.548710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-01-13 00:55:30.548736 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.548744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-13 00:55:30.548750 | orchestrator | skipping: 
[testbed-node-0] 2026-01-13 00:55:30.548757 | orchestrator | 2026-01-13 00:55:30.548762 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-13 00:55:30.548769 | orchestrator | Tuesday 13 January 2026 00:52:55 +0000 (0:00:02.926) 0:00:26.009 ******* 2026-01-13 00:55:30.548776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-13 00:55:30.548786 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:30.548794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-13 00:55:30.548798 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.548804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 
 2026-01-13 00:55:30.548811 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.548815 | orchestrator | 2026-01-13 00:55:30.548818 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-01-13 00:55:30.548822 | orchestrator | Tuesday 13 January 2026 00:52:58 +0000 (0:00:03.340) 0:00:29.350 ******* 2026-01-13 00:55:30.548830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 
fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-13 00:55:30.548836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-13 00:55:30.548847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-13 00:55:30.548852 | orchestrator | 2026-01-13 00:55:30.548856 | orchestrator | TASK 
[mariadb : Create MariaDB volume] ***************************************** 2026-01-13 00:55:30.548859 | orchestrator | Tuesday 13 January 2026 00:53:02 +0000 (0:00:03.952) 0:00:33.302 ******* 2026-01-13 00:55:30.548863 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:30.548867 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:30.548871 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:30.548874 | orchestrator | 2026-01-13 00:55:30.548878 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-13 00:55:30.548882 | orchestrator | Tuesday 13 January 2026 00:53:03 +0000 (0:00:00.794) 0:00:34.097 ******* 2026-01-13 00:55:30.548885 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:30.548889 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:30.548893 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:30.548896 | orchestrator | 2026-01-13 00:55:30.548900 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-13 00:55:30.548904 | orchestrator | Tuesday 13 January 2026 00:53:04 +0000 (0:00:00.443) 0:00:34.540 ******* 2026-01-13 00:55:30.548907 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:30.548911 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:30.548915 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:30.548918 | orchestrator | 2026-01-13 00:55:30.548922 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-13 00:55:30.548926 | orchestrator | Tuesday 13 January 2026 00:53:04 +0000 (0:00:00.284) 0:00:34.825 ******* 2026-01-13 00:55:30.548930 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-13 00:55:30.548936 | orchestrator | ...ignoring 2026-01-13 00:55:30.548940 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-13 00:55:30.548943 | orchestrator | ...ignoring 2026-01-13 00:55:30.548953 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-13 00:55:30.548957 | orchestrator | ...ignoring 2026-01-13 00:55:30.548965 | orchestrator | 2026-01-13 00:55:30.548969 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-13 00:55:30.548973 | orchestrator | Tuesday 13 January 2026 00:53:15 +0000 (0:00:10.939) 0:00:45.765 ******* 2026-01-13 00:55:30.548977 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:30.548980 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:30.548984 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:30.548988 | orchestrator | 2026-01-13 00:55:30.548992 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-13 00:55:30.548997 | orchestrator | Tuesday 13 January 2026 00:53:15 +0000 (0:00:00.406) 0:00:46.171 ******* 2026-01-13 00:55:30.549001 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:30.549005 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.549008 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.549012 | orchestrator | 2026-01-13 00:55:30.549016 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-13 00:55:30.549019 | orchestrator | Tuesday 13 January 2026 00:53:16 +0000 (0:00:00.641) 0:00:46.812 ******* 2026-01-13 00:55:30.549023 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:30.549027 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.549030 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.549034 | orchestrator | 2026-01-13 00:55:30.549038 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-13 00:55:30.549041 | orchestrator | Tuesday 13 January 2026 00:53:16 +0000 (0:00:00.422) 0:00:47.235 ******* 2026-01-13 00:55:30.549045 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:30.549049 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.549052 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.549056 | orchestrator | 2026-01-13 00:55:30.549060 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-13 00:55:30.549064 | orchestrator | Tuesday 13 January 2026 00:53:17 +0000 (0:00:00.407) 0:00:47.642 ******* 2026-01-13 00:55:30.549067 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:30.549071 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:30.549075 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:30.549078 | orchestrator | 2026-01-13 00:55:30.549082 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-13 00:55:30.549086 | orchestrator | Tuesday 13 January 2026 00:53:17 +0000 (0:00:00.396) 0:00:48.038 ******* 2026-01-13 00:55:30.549092 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:30.549096 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.549099 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.549103 | orchestrator | 2026-01-13 00:55:30.549107 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-13 00:55:30.549110 | orchestrator | Tuesday 13 January 2026 00:53:18 +0000 (0:00:00.642) 0:00:48.680 ******* 2026-01-13 00:55:30.549114 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.549118 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.549121 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-13 00:55:30.549125 | orchestrator | 2026-01-13 
00:55:30.549129 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-13 00:55:30.549133 | orchestrator | Tuesday 13 January 2026 00:53:18 +0000 (0:00:00.410) 0:00:49.091 ******* 2026-01-13 00:55:30.549136 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:30.549140 | orchestrator | 2026-01-13 00:55:30.549144 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-13 00:55:30.549150 | orchestrator | Tuesday 13 January 2026 00:53:29 +0000 (0:00:10.549) 0:00:59.640 ******* 2026-01-13 00:55:30.549154 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:30.549157 | orchestrator | 2026-01-13 00:55:30.549161 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-13 00:55:30.549165 | orchestrator | Tuesday 13 January 2026 00:53:29 +0000 (0:00:00.130) 0:00:59.771 ******* 2026-01-13 00:55:30.549168 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:30.549172 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.549176 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.549179 | orchestrator | 2026-01-13 00:55:30.549183 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-13 00:55:30.549187 | orchestrator | Tuesday 13 January 2026 00:53:30 +0000 (0:00:00.989) 0:01:00.760 ******* 2026-01-13 00:55:30.549190 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:30.549194 | orchestrator | 2026-01-13 00:55:30.549198 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-13 00:55:30.549201 | orchestrator | Tuesday 13 January 2026 00:53:38 +0000 (0:00:07.915) 0:01:08.676 ******* 2026-01-13 00:55:30.549205 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:30.549209 | orchestrator | 2026-01-13 00:55:30.549213 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-01-13 00:55:30.549216 | orchestrator | Tuesday 13 January 2026 00:53:39 +0000 (0:00:01.577) 0:01:10.253 ******* 2026-01-13 00:55:30.549220 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:30.549224 | orchestrator | 2026-01-13 00:55:30.549227 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-13 00:55:30.549231 | orchestrator | Tuesday 13 January 2026 00:53:42 +0000 (0:00:02.502) 0:01:12.756 ******* 2026-01-13 00:55:30.549235 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:30.549238 | orchestrator | 2026-01-13 00:55:30.549242 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-13 00:55:30.549246 | orchestrator | Tuesday 13 January 2026 00:53:42 +0000 (0:00:00.117) 0:01:12.874 ******* 2026-01-13 00:55:30.549249 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:30.549253 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.549257 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.549260 | orchestrator | 2026-01-13 00:55:30.549264 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-13 00:55:30.549268 | orchestrator | Tuesday 13 January 2026 00:53:42 +0000 (0:00:00.351) 0:01:13.225 ******* 2026-01-13 00:55:30.549271 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:30.549275 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-13 00:55:30.549279 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:30.549283 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:30.549286 | orchestrator | 2026-01-13 00:55:30.549290 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-13 00:55:30.549294 | orchestrator | skipping: no hosts matched 2026-01-13 00:55:30.549297 | orchestrator | 2026-01-13 00:55:30.549301 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-13 00:55:30.549305 | orchestrator | 2026-01-13 00:55:30.549308 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-13 00:55:30.549312 | orchestrator | Tuesday 13 January 2026 00:53:43 +0000 (0:00:00.599) 0:01:13.825 ******* 2026-01-13 00:55:30.549316 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:55:30.549319 | orchestrator | 2026-01-13 00:55:30.549325 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-13 00:55:30.549329 | orchestrator | Tuesday 13 January 2026 00:54:01 +0000 (0:00:18.426) 0:01:32.252 ******* 2026-01-13 00:55:30.549333 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:30.549337 | orchestrator | 2026-01-13 00:55:30.549340 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-13 00:55:30.549347 | orchestrator | Tuesday 13 January 2026 00:54:17 +0000 (0:00:15.633) 0:01:47.885 ******* 2026-01-13 00:55:30.549357 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:30.549364 | orchestrator | 2026-01-13 00:55:30.549370 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-13 00:55:30.549376 | orchestrator | 2026-01-13 00:55:30.549382 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-13 00:55:30.549388 | orchestrator | Tuesday 13 January 2026 00:54:19 +0000 (0:00:02.227) 0:01:50.113 ******* 2026-01-13 00:55:30.549396 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:55:30.549403 | orchestrator | 2026-01-13 00:55:30.549410 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-13 00:55:30.549417 | orchestrator | Tuesday 13 January 2026 00:54:37 +0000 (0:00:17.340) 0:02:07.453 ******* 2026-01-13 00:55:30.549424 | 
orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:30.549428 | orchestrator | 2026-01-13 00:55:30.549432 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-13 00:55:30.549435 | orchestrator | Tuesday 13 January 2026 00:54:52 +0000 (0:00:15.602) 0:02:23.056 ******* 2026-01-13 00:55:30.549439 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:30.549443 | orchestrator | 2026-01-13 00:55:30.549448 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-13 00:55:30.549455 | orchestrator | 2026-01-13 00:55:30.549464 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-13 00:55:30.549470 | orchestrator | Tuesday 13 January 2026 00:54:55 +0000 (0:00:02.492) 0:02:25.548 ******* 2026-01-13 00:55:30.549477 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:30.549483 | orchestrator | 2026-01-13 00:55:30.549490 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-13 00:55:30.549496 | orchestrator | Tuesday 13 January 2026 00:55:12 +0000 (0:00:17.259) 0:02:42.807 ******* 2026-01-13 00:55:30.549503 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:30.549509 | orchestrator | 2026-01-13 00:55:30.549513 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-13 00:55:30.549517 | orchestrator | Tuesday 13 January 2026 00:55:13 +0000 (0:00:00.653) 0:02:43.461 ******* 2026-01-13 00:55:30.549520 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:30.549524 | orchestrator | 2026-01-13 00:55:30.549527 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-13 00:55:30.549531 | orchestrator | 2026-01-13 00:55:30.549537 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-13 00:55:30.549543 | orchestrator | 
Tuesday 13 January 2026 00:55:15 +0000 (0:00:02.666) 0:02:46.128 ******* 2026-01-13 00:55:30.549549 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:55:30.549555 | orchestrator | 2026-01-13 00:55:30.549562 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-13 00:55:30.549569 | orchestrator | Tuesday 13 January 2026 00:55:16 +0000 (0:00:00.555) 0:02:46.684 ******* 2026-01-13 00:55:30.549575 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.549581 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.549588 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:30.549594 | orchestrator | 2026-01-13 00:55:30.549600 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-13 00:55:30.549607 | orchestrator | Tuesday 13 January 2026 00:55:18 +0000 (0:00:02.256) 0:02:48.940 ******* 2026-01-13 00:55:30.549612 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.549618 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.549624 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:30.549630 | orchestrator | 2026-01-13 00:55:30.549636 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-13 00:55:30.549642 | orchestrator | Tuesday 13 January 2026 00:55:20 +0000 (0:00:02.080) 0:02:51.021 ******* 2026-01-13 00:55:30.549647 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.549653 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.549663 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:30.549669 | orchestrator | 2026-01-13 00:55:30.549675 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-13 00:55:30.549681 | orchestrator | Tuesday 13 January 2026 00:55:23 +0000 (0:00:02.638) 0:02:53.660 ******* 2026-01-13 00:55:30.549688 | 
orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.549694 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.549701 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:55:30.549707 | orchestrator | 2026-01-13 00:55:30.549713 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-13 00:55:30.549729 | orchestrator | Tuesday 13 January 2026 00:55:25 +0000 (0:00:02.407) 0:02:56.067 ******* 2026-01-13 00:55:30.549733 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:55:30.549737 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:55:30.549741 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:55:30.549745 | orchestrator | 2026-01-13 00:55:30.549748 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-13 00:55:30.549752 | orchestrator | Tuesday 13 January 2026 00:55:29 +0000 (0:00:03.538) 0:02:59.605 ******* 2026-01-13 00:55:30.549756 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:55:30.549759 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:55:30.549763 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:55:30.549767 | orchestrator | 2026-01-13 00:55:30.549771 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:55:30.549774 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-13 00:55:30.549781 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-01-13 00:55:30.549786 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-13 00:55:30.549790 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-13 00:55:30.549794 | orchestrator | 2026-01-13 00:55:30.549797 | orchestrator | 2026-01-13 00:55:30.549801 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-13 00:55:30.549805 | orchestrator | Tuesday 13 January 2026 00:55:29 +0000 (0:00:00.232) 0:02:59.837 ******* 2026-01-13 00:55:30.549808 | orchestrator | =============================================================================== 2026-01-13 00:55:30.549812 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.77s 2026-01-13 00:55:30.549816 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.24s 2026-01-13 00:55:30.549819 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.26s 2026-01-13 00:55:30.549823 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.94s 2026-01-13 00:55:30.549827 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.55s 2026-01-13 00:55:30.549831 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.92s 2026-01-13 00:55:30.549838 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.72s 2026-01-13 00:55:30.549842 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.10s 2026-01-13 00:55:30.549846 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.95s 2026-01-13 00:55:30.549850 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.54s 2026-01-13 00:55:30.549853 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.47s 2026-01-13 00:55:30.549857 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.42s 2026-01-13 00:55:30.549861 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.34s 2026-01-13 00:55:30.549867 | orchestrator | service-cert-copy : 
mariadb | Copying over extra CA certificates -------- 3.26s 2026-01-13 00:55:30.549871 | orchestrator | Check MariaDB service --------------------------------------------------- 3.03s 2026-01-13 00:55:30.549875 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.93s 2026-01-13 00:55:30.549878 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.67s 2026-01-13 00:55:30.549882 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.64s 2026-01-13 00:55:30.549886 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.50s 2026-01-13 00:55:30.549890 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.41s 2026-01-13 00:55:33.582383 | orchestrator | 2026-01-13 00:55:33 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state STARTED 2026-01-13 00:55:33.582874 | orchestrator | 2026-01-13 00:55:33 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:55:33.584106 | orchestrator | 2026-01-13 00:55:33 | INFO  | Task 6d840277-c60b-4757-be1d-e554d76c0c33 is in state STARTED 2026-01-13 00:55:33.584130 | orchestrator | 2026-01-13 00:55:33 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:55:36.607823 | orchestrator | 2026-01-13 00:55:36 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state STARTED 2026-01-13 00:55:36.608014 | orchestrator | 2026-01-13 00:55:36 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:55:36.609391 | orchestrator | 2026-01-13 00:55:36 | INFO  | Task 6d840277-c60b-4757-be1d-e554d76c0c33 is in state STARTED 2026-01-13 00:55:36.609441 | orchestrator | 2026-01-13 00:55:36 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:55:39.649971 | orchestrator | 2026-01-13 00:55:39 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state STARTED 2026-01-13 
00:55:39.650079 | orchestrator | 2026-01-13 00:55:39 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED
2026-01-13 00:55:39.651623 | orchestrator | 2026-01-13 00:55:39 | INFO  | Task 6d840277-c60b-4757-be1d-e554d76c0c33 is in state STARTED
2026-01-13 00:55:39.651682 | orchestrator | 2026-01-13 00:55:39 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:55:42.683739 | orchestrator | 2026-01-13 00:55:42 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state STARTED
2026-01-13 00:55:42.684898 | orchestrator | 2026-01-13 00:55:42 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED
2026-01-13 00:55:42.686425 | orchestrator | 2026-01-13 00:55:42 | INFO  | Task 6d840277-c60b-4757-be1d-e554d76c0c33 is in state STARTED
2026-01-13 00:55:42.686472 | orchestrator | 2026-01-13 00:55:42 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:57:17.318642 | orchestrator | 2026-01-13 00:57:17 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state STARTED
2026-01-13 00:57:17.320954 | orchestrator | 2026-01-13 00:57:17 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED
2026-01-13 00:57:17.324677 | orchestrator | 2026-01-13 00:57:17 | INFO  | Task 6d840277-c60b-4757-be1d-e554d76c0c33 is in state SUCCESS
2026-01-13 00:57:17.325592 | orchestrator | 2026-01-13 00:57:17 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:57:17.326814 | orchestrator |
2026-01-13 00:57:17.326842 | orchestrator |
2026-01-13 00:57:17.326847 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-13 00:57:17.326851 | orchestrator |
2026-01-13 00:57:17.326855 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-13 00:57:17.326859 | orchestrator | Tuesday 13 January 2026 00:55:34 +0000 (0:00:00.315) 0:00:00.315 *******
2026-01-13 00:57:17.326863 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:57:17.326867 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:57:17.326871 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:57:17.326875 | orchestrator |
2026-01-13 00:57:17.326879 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-13 00:57:17.326883 | orchestrator | Tuesday 13 January 2026 00:55:34 +0000
(0:00:00.262) 0:00:00.578 *******
2026-01-13 00:57:17.326887 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-01-13 00:57:17.326891 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-01-13 00:57:17.326894 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-01-13 00:57:17.326898 | orchestrator |
2026-01-13 00:57:17.326901 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-01-13 00:57:17.326905 | orchestrator |
2026-01-13 00:57:17.326909 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-13 00:57:17.326913 | orchestrator | Tuesday 13 January 2026 00:55:35 +0000 (0:00:00.376) 0:00:00.954 *******
2026-01-13 00:57:17.326916 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 00:57:17.326920 | orchestrator |
2026-01-13 00:57:17.326924 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-01-13 00:57:17.326928 | orchestrator | Tuesday 13 January 2026 00:55:35 +0000 (0:00:00.467) 0:00:01.422 *******
2026-01-13 00:57:17.326943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-13 00:57:17.326968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-13 00:57:17.326976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-13 00:57:17.326983 | orchestrator |
2026-01-13 00:57:17.326987 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-01-13 00:57:17.326991 | orchestrator | Tuesday
13 January 2026 00:55:36 +0000 (0:00:01.125) 0:00:02.547 *******
2026-01-13 00:57:17.326994 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:57:17.326998 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:57:17.327002 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:57:17.327006 | orchestrator |
2026-01-13 00:57:17.327009 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-13 00:57:17.327013 | orchestrator | Tuesday 13 January 2026 00:55:37 +0000 (0:00:00.371) 0:00:02.918 *******
2026-01-13 00:57:17.327017 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-01-13 00:57:17.327024 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-01-13 00:57:17.327031 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-01-13 00:57:17.327037 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-01-13 00:57:17.327044 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-01-13 00:57:17.327051 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-01-13 00:57:17.327057 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-01-13 00:57:17.327063 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-01-13 00:57:17.327070 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-01-13 00:57:17.327076 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-01-13 00:57:17.327082 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-01-13 00:57:17.327089 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-01-13 00:57:17.327096 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-01-13 00:57:17.327102 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-01-13 00:57:17.327109 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-01-13 00:57:17.327116 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-01-13 00:57:17.327122 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-01-13 00:57:17.327129 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-01-13 00:57:17.327136 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-01-13 00:57:17.327143 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-01-13 00:57:17.327147 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-01-13 00:57:17.327150 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-01-13 00:57:17.327154 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-01-13 00:57:17.327158 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-01-13 00:57:17.327162 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-01-13 00:57:17.327172 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-01-13 00:57:17.327176 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-01-13 00:57:17.327180 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-01-13 00:57:17.327184 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-01-13 00:57:17.327188 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-01-13 00:57:17.327191 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-01-13 00:57:17.327195 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-01-13 00:57:17.327199 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-01-13 00:57:17.327203 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-01-13 00:57:17.327207 | orchestrator |
2026-01-13 00:57:17.327211 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-13 00:57:17.327214 | orchestrator | Tuesday 13 January 2026 00:55:37 +0000 (0:00:00.613) 0:00:03.531 *******
2026-01-13 00:57:17.327218 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:57:17.327222 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:57:17.327225 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:57:17.327229 | orchestrator |
2026-01-13 00:57:17.327233 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-13 00:57:17.327237 | orchestrator | Tuesday 13 January 2026 00:55:37 +0000 (0:00:00.263) 0:00:03.795 *******
2026-01-13 00:57:17.327240 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:57:17.327244 | orchestrator |
2026-01-13 00:57:17.327251 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-13 00:57:17.327255 | orchestrator | Tuesday 13 January 2026 00:55:38 +0000 (0:00:00.117) 0:00:03.913 *******
2026-01-13 00:57:17.327259 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:57:17.327263 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:57:17.327266 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:57:17.327270 | orchestrator |
2026-01-13 00:57:17.327274 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-13 00:57:17.327277 | orchestrator | Tuesday 13 January 2026 00:55:38 +0000 (0:00:00.425) 0:00:04.339 *******
2026-01-13 00:57:17.327281 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:57:17.327285 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:57:17.327289 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:57:17.327292 | orchestrator |
2026-01-13 00:57:17.327296 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-13 00:57:17.327300 | orchestrator | Tuesday 13 January 2026 00:55:38 +0000 (0:00:00.256) 0:00:04.596 *******
2026-01-13 00:57:17.327303 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:57:17.327307 | orchestrator |
2026-01-13 00:57:17.327311 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-13 00:57:17.327314 | orchestrator | Tuesday 13 January 2026 00:55:38 +0000 (0:00:00.120) 0:00:04.717 *******
2026-01-13 00:57:17.327321 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:57:17.327324 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:57:17.327328 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:57:17.327332 | orchestrator |
2026-01-13 00:57:17.327335 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-13 00:57:17.327339 | orchestrator | Tuesday 13 January 2026 00:55:39 +0000 (0:00:00.272) 0:00:04.989 *******
2026-01-13 00:57:17.327343 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:57:17.327347 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:57:17.327350 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:57:17.327354 | orchestrator |
2026-01-13 00:57:17.327358 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-13 00:57:17.327361 | orchestrator | Tuesday 13 January 2026 00:55:39 +0000 (0:00:00.298) 0:00:05.287 *******
2026-01-13 00:57:17.327480 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:57:17.327486 | orchestrator |
2026-01-13 00:57:17.327490 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-13 00:57:17.327494 | orchestrator | Tuesday 13 January 2026 00:55:39 +0000 (0:00:00.300) 0:00:05.588 *******
2026-01-13 00:57:17.327499 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:57:17.327503 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:57:17.327508 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:57:17.327515 | orchestrator |
2026-01-13 00:57:17.327578 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-13 00:57:17.327594 | orchestrator | Tuesday 13 January 2026 00:55:39 +0000 (0:00:00.259) 0:00:05.847 *******
2026-01-13 00:57:17.327600 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:57:17.327607 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:57:17.327614 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:57:17.327621 | orchestrator |
2026-01-13 00:57:17.327627 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-13 00:57:17.327633 | orchestrator | Tuesday 13 January 2026 00:55:40 +0000 (0:00:00.284) 0:00:06.132 *******
2026-01-13 00:57:17.327638 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:57:17.327644 | orchestrator |
2026-01-13 00:57:17.327651 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-13 00:57:17.327662 | orchestrator | Tuesday 13 January 2026 00:55:40 +0000 (0:00:00.106) 0:00:06.238 *******
2026-01-13 00:57:17.327668 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:57:17.327675 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:57:17.327681 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:57:17.327687 | orchestrator |
2026-01-13 00:57:17.327694 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-13 00:57:17.327701 | orchestrator | Tuesday 13 January 2026 00:55:40 +0000 (0:00:00.248) 0:00:06.487 *******
2026-01-13 00:57:17.327707 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:57:17.327713 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:57:17.327720 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:57:17.327726 | orchestrator |
2026-01-13 00:57:17.327731 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-13 00:57:17.327735 | orchestrator | Tuesday 13 January 2026 00:55:41 +0000 (0:00:00.405) 0:00:06.892 *******
2026-01-13 00:57:17.327738 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:57:17.327742 | orchestrator |
2026-01-13 00:57:17.327746 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-13 00:57:17.327750 | orchestrator | Tuesday 13 January 2026 00:55:41 +0000 (0:00:00.130) 0:00:07.022 *******
2026-01-13 00:57:17.327753 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:57:17.327757 | orchestrator
| skipping: [testbed-node-1] 2026-01-13 00:57:17.327761 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:57:17.327764 | orchestrator | 2026-01-13 00:57:17.327768 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-13 00:57:17.327772 | orchestrator | Tuesday 13 January 2026 00:55:41 +0000 (0:00:00.252) 0:00:07.275 ******* 2026-01-13 00:57:17.327775 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:57:17.327784 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:57:17.327788 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:57:17.327791 | orchestrator | 2026-01-13 00:57:17.327795 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-13 00:57:17.327799 | orchestrator | Tuesday 13 January 2026 00:55:41 +0000 (0:00:00.253) 0:00:07.528 ******* 2026-01-13 00:57:17.327802 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:57:17.327806 | orchestrator | 2026-01-13 00:57:17.327810 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-13 00:57:17.327813 | orchestrator | Tuesday 13 January 2026 00:55:41 +0000 (0:00:00.107) 0:00:07.636 ******* 2026-01-13 00:57:17.327817 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:57:17.327821 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:57:17.327825 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:57:17.327828 | orchestrator | 2026-01-13 00:57:17.327832 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-13 00:57:17.327840 | orchestrator | Tuesday 13 January 2026 00:55:41 +0000 (0:00:00.234) 0:00:07.870 ******* 2026-01-13 00:57:17.327844 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:57:17.327848 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:57:17.327851 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:57:17.327855 | orchestrator | 2026-01-13 00:57:17.327859 | 
orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-13 00:57:17.327862 | orchestrator | Tuesday 13 January 2026 00:55:42 +0000 (0:00:00.442) 0:00:08.312 ******* 2026-01-13 00:57:17.327866 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:57:17.327870 | orchestrator | 2026-01-13 00:57:17.327874 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-13 00:57:17.327877 | orchestrator | Tuesday 13 January 2026 00:55:42 +0000 (0:00:00.107) 0:00:08.419 ******* 2026-01-13 00:57:17.327881 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:57:17.327885 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:57:17.327888 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:57:17.327892 | orchestrator | 2026-01-13 00:57:17.327896 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-13 00:57:17.327899 | orchestrator | Tuesday 13 January 2026 00:55:42 +0000 (0:00:00.240) 0:00:08.660 ******* 2026-01-13 00:57:17.327903 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:57:17.327907 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:57:17.327910 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:57:17.327914 | orchestrator | 2026-01-13 00:57:17.327918 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-13 00:57:17.327921 | orchestrator | Tuesday 13 January 2026 00:55:43 +0000 (0:00:00.258) 0:00:08.919 ******* 2026-01-13 00:57:17.327925 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:57:17.327929 | orchestrator | 2026-01-13 00:57:17.327932 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-13 00:57:17.327936 | orchestrator | Tuesday 13 January 2026 00:55:43 +0000 (0:00:00.109) 0:00:09.028 ******* 2026-01-13 00:57:17.327940 | orchestrator | skipping: [testbed-node-0] 2026-01-13 
00:57:17.327943 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:57:17.327947 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:57:17.327951 | orchestrator | 2026-01-13 00:57:17.327954 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-13 00:57:17.327958 | orchestrator | Tuesday 13 January 2026 00:55:43 +0000 (0:00:00.370) 0:00:09.399 ******* 2026-01-13 00:57:17.327962 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:57:17.327966 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:57:17.327969 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:57:17.327973 | orchestrator | 2026-01-13 00:57:17.327977 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-13 00:57:17.327980 | orchestrator | Tuesday 13 January 2026 00:55:43 +0000 (0:00:00.277) 0:00:09.676 ******* 2026-01-13 00:57:17.327984 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:57:17.327988 | orchestrator | 2026-01-13 00:57:17.327994 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-13 00:57:17.327997 | orchestrator | Tuesday 13 January 2026 00:55:43 +0000 (0:00:00.104) 0:00:09.781 ******* 2026-01-13 00:57:17.328001 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:57:17.328005 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:57:17.328008 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:57:17.328012 | orchestrator | 2026-01-13 00:57:17.328016 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-13 00:57:17.328019 | orchestrator | Tuesday 13 January 2026 00:55:44 +0000 (0:00:00.239) 0:00:10.020 ******* 2026-01-13 00:57:17.328023 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:57:17.328027 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:57:17.328033 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:57:17.328036 | orchestrator | 
2026-01-13 00:57:17.328040 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-13 00:57:17.328044 | orchestrator | Tuesday 13 January 2026 00:55:44 +0000 (0:00:00.286) 0:00:10.307 *******
2026-01-13 00:57:17.328048 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:57:17.328052 | orchestrator |
2026-01-13 00:57:17.328055 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-13 00:57:17.328059 | orchestrator | Tuesday 13 January 2026 00:55:44 +0000 (0:00:00.128) 0:00:10.435 *******
2026-01-13 00:57:17.328063 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:57:17.328066 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:57:17.328070 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:57:17.328074 | orchestrator |
2026-01-13 00:57:17.328077 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-01-13 00:57:17.328081 | orchestrator | Tuesday 13 January 2026 00:55:45 +0000 (0:00:00.520) 0:00:10.956 *******
2026-01-13 00:57:17.328085 | orchestrator | changed: [testbed-node-2]
2026-01-13 00:57:17.328088 | orchestrator | changed: [testbed-node-0]
2026-01-13 00:57:17.328092 | orchestrator | changed: [testbed-node-1]
2026-01-13 00:57:17.328095 | orchestrator |
2026-01-13 00:57:17.328099 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-01-13 00:57:17.328103 | orchestrator | Tuesday 13 January 2026 00:55:46 +0000 (0:00:01.692) 0:00:12.648 *******
2026-01-13 00:57:17.328107 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-13 00:57:17.328110 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-13 00:57:17.328114 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-13 00:57:17.328118 | orchestrator |
2026-01-13 00:57:17.328121 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-01-13 00:57:17.328125 | orchestrator | Tuesday 13 January 2026 00:55:48 +0000 (0:00:01.716) 0:00:14.365 *******
2026-01-13 00:57:17.328129 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-13 00:57:17.328133 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-13 00:57:17.328137 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-13 00:57:17.328141 | orchestrator |
2026-01-13 00:57:17.328144 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-01-13 00:57:17.328151 | orchestrator | Tuesday 13 January 2026 00:55:50 +0000 (0:00:02.318) 0:00:16.683 *******
2026-01-13 00:57:17.328155 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-13 00:57:17.328158 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-13 00:57:17.328162 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-13 00:57:17.328166 | orchestrator |
2026-01-13 00:57:17.328169 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-01-13 00:57:17.328175 | orchestrator | Tuesday 13 January 2026 00:55:52 +0000 (0:00:02.063) 0:00:18.747 *******
2026-01-13 00:57:17.328179 | orchestrator | skipping: [testbed-node-0]
2026-01-13 00:57:17.328183 | orchestrator | skipping: [testbed-node-1]
2026-01-13 00:57:17.328186 | orchestrator | skipping: [testbed-node-2]
2026-01-13 00:57:17.328190 | orchestrator |
2026-01-13 00:57:17.328194 | orchestrator | TASK
[horizon : Copying over custom themes] ************************************ 2026-01-13 00:57:17.328197 | orchestrator | Tuesday 13 January 2026 00:55:53 +0000 (0:00:00.298) 0:00:19.045 ******* 2026-01-13 00:57:17.328201 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:57:17.328205 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:57:17.328208 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:57:17.328212 | orchestrator | 2026-01-13 00:57:17.328216 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-13 00:57:17.328220 | orchestrator | Tuesday 13 January 2026 00:55:53 +0000 (0:00:00.310) 0:00:19.356 ******* 2026-01-13 00:57:17.328223 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:57:17.328227 | orchestrator | 2026-01-13 00:57:17.328231 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-13 00:57:17.328234 | orchestrator | Tuesday 13 January 2026 00:55:54 +0000 (0:00:00.844) 0:00:20.201 ******* 2026-01-13 00:57:17.328242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-13 00:57:17.328251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-13 00:57:17.328264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-13 00:57:17.328269 | orchestrator | 2026-01-13 00:57:17.328273 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-13 00:57:17.328276 | orchestrator | Tuesday 13 January 2026 00:55:55 +0000 
(0:00:01.462) 0:00:21.664 ******* 2026-01-13 00:57:17.328286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-13 00:57:17.328291 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:57:17.328300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-13 00:57:17.328307 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:57:17.328311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-13 00:57:17.328317 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:57:17.328321 | orchestrator | 2026-01-13 00:57:17.328325 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-13 00:57:17.328329 | orchestrator | Tuesday 13 January 2026 00:55:56 +0000 (0:00:00.629) 0:00:22.293 ******* 2026-01-13 00:57:17.328336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-13 00:57:17.328342 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:57:17.328348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-13 00:57:17.328353 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:57:17.328359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-13 00:57:17.328366 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:57:17.328369 | orchestrator | 2026-01-13 00:57:17.328373 | 
orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-13 00:57:17.328377 | orchestrator | Tuesday 13 January 2026 00:55:57 +0000 (0:00:00.950) 0:00:23.243 ******* 2026-01-13 00:57:17.328383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-13 00:57:17.328390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-13 00:57:17.328399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-13 00:57:17.328406 | orchestrator | 2026-01-13 00:57:17.328410 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-13 00:57:17.328414 | orchestrator | Tuesday 13 January 2026 00:55:59 +0000 (0:00:01.652) 0:00:24.896 ******* 2026-01-13 00:57:17.328418 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:57:17.328421 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:57:17.328425 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:57:17.328429 | orchestrator | 2026-01-13 00:57:17.328432 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-13 00:57:17.328436 | orchestrator | Tuesday 13 January 2026 00:55:59 +0000 (0:00:00.302) 0:00:25.198 ******* 2026-01-13 00:57:17.328440 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:57:17.328444 | orchestrator | 2026-01-13 00:57:17.328447 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-01-13 00:57:17.328453 | orchestrator | Tuesday 13 January 2026 00:55:59 +0000 (0:00:00.516) 
0:00:25.715 ******* 2026-01-13 00:57:17.328457 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:57:17.328461 | orchestrator | 2026-01-13 00:57:17.328464 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-01-13 00:57:17.328468 | orchestrator | Tuesday 13 January 2026 00:56:02 +0000 (0:00:02.774) 0:00:28.489 ******* 2026-01-13 00:57:17.328472 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:57:17.328475 | orchestrator | 2026-01-13 00:57:17.328479 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-01-13 00:57:17.328483 | orchestrator | Tuesday 13 January 2026 00:56:05 +0000 (0:00:03.164) 0:00:31.654 ******* 2026-01-13 00:57:17.328486 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:57:17.328490 | orchestrator | 2026-01-13 00:57:17.328494 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-13 00:57:17.328497 | orchestrator | Tuesday 13 January 2026 00:56:22 +0000 (0:00:16.846) 0:00:48.501 ******* 2026-01-13 00:57:17.328501 | orchestrator | 2026-01-13 00:57:17.328505 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-13 00:57:17.328508 | orchestrator | Tuesday 13 January 2026 00:56:22 +0000 (0:00:00.075) 0:00:48.577 ******* 2026-01-13 00:57:17.328512 | orchestrator | 2026-01-13 00:57:17.328516 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-13 00:57:17.328520 | orchestrator | Tuesday 13 January 2026 00:56:22 +0000 (0:00:00.081) 0:00:48.658 ******* 2026-01-13 00:57:17.328536 | orchestrator | 2026-01-13 00:57:17.328540 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-01-13 00:57:17.328544 | orchestrator | Tuesday 13 January 2026 00:56:22 +0000 (0:00:00.069) 0:00:48.727 ******* 2026-01-13 00:57:17.328548 | orchestrator | changed: 
[testbed-node-0] 2026-01-13 00:57:17.328552 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:57:17.328556 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:57:17.328560 | orchestrator | 2026-01-13 00:57:17.328563 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:57:17.328567 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-13 00:57:17.328571 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-13 00:57:17.328575 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-13 00:57:17.328579 | orchestrator | 2026-01-13 00:57:17.328583 | orchestrator | 2026-01-13 00:57:17.328587 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:57:17.328590 | orchestrator | Tuesday 13 January 2026 00:57:16 +0000 (0:00:53.330) 0:01:42.058 ******* 2026-01-13 00:57:17.328597 | orchestrator | =============================================================================== 2026-01-13 00:57:17.328601 | orchestrator | horizon : Restart horizon container ------------------------------------ 53.33s 2026-01-13 00:57:17.328605 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.85s 2026-01-13 00:57:17.328608 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.17s 2026-01-13 00:57:17.328614 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.77s 2026-01-13 00:57:17.328617 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.32s 2026-01-13 00:57:17.328621 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.06s 2026-01-13 00:57:17.328625 | orchestrator | horizon : Copying over 
horizon.conf ------------------------------------- 1.72s 2026-01-13 00:57:17.328628 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.69s 2026-01-13 00:57:17.328632 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.65s 2026-01-13 00:57:17.328636 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.46s 2026-01-13 00:57:17.328640 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.13s 2026-01-13 00:57:17.328643 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.95s 2026-01-13 00:57:17.328647 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.84s 2026-01-13 00:57:17.328651 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.63s 2026-01-13 00:57:17.328654 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s 2026-01-13 00:57:17.328658 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 2026-01-13 00:57:17.328662 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-01-13 00:57:17.328666 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.47s 2026-01-13 00:57:17.328669 | orchestrator | horizon : Update policy file name --------------------------------------- 0.44s 2026-01-13 00:57:17.328673 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.43s 2026-01-13 00:57:20.369829 | orchestrator | 2026-01-13 00:57:20 | INFO  | Task e2829daa-db1e-4082-8441-642a23e938b6 is in state SUCCESS 2026-01-13 00:57:20.377316 | orchestrator | 2026-01-13 00:57:20.377377 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-13 00:57:20.377386 
| orchestrator | 2.16.14 2026-01-13 00:57:20.377394 | orchestrator | 2026-01-13 00:57:20.377401 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-13 00:57:20.377408 | orchestrator | 2026-01-13 00:57:20.377415 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-13 00:57:20.377422 | orchestrator | Tuesday 13 January 2026 00:55:11 +0000 (0:00:00.561) 0:00:00.561 ******* 2026-01-13 00:57:20.377428 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:57:20.377435 | orchestrator | 2026-01-13 00:57:20.377442 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-13 00:57:20.377448 | orchestrator | Tuesday 13 January 2026 00:55:12 +0000 (0:00:00.550) 0:00:01.111 ******* 2026-01-13 00:57:20.377455 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:57:20.377461 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.377468 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.377473 | orchestrator | 2026-01-13 00:57:20.377479 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-13 00:57:20.377487 | orchestrator | Tuesday 13 January 2026 00:55:12 +0000 (0:00:00.625) 0:00:01.737 ******* 2026-01-13 00:57:20.377493 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.377499 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:57:20.377595 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.377848 | orchestrator | 2026-01-13 00:57:20.377862 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-13 00:57:20.377869 | orchestrator | Tuesday 13 January 2026 00:55:12 +0000 (0:00:00.252) 0:00:01.989 ******* 2026-01-13 00:57:20.377876 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.377882 | orchestrator | ok: 
[testbed-node-4] 2026-01-13 00:57:20.377910 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.377917 | orchestrator | 2026-01-13 00:57:20.377925 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-13 00:57:20.377932 | orchestrator | Tuesday 13 January 2026 00:55:13 +0000 (0:00:00.841) 0:00:02.831 ******* 2026-01-13 00:57:20.377940 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.377947 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:57:20.377954 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.377973 | orchestrator | 2026-01-13 00:57:20.377980 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-13 00:57:20.377987 | orchestrator | Tuesday 13 January 2026 00:55:14 +0000 (0:00:00.284) 0:00:03.116 ******* 2026-01-13 00:57:20.377995 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.378002 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:57:20.378008 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.378049 | orchestrator | 2026-01-13 00:57:20.378057 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-13 00:57:20.378064 | orchestrator | Tuesday 13 January 2026 00:55:14 +0000 (0:00:00.249) 0:00:03.365 ******* 2026-01-13 00:57:20.378072 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.378079 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:57:20.378086 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.378093 | orchestrator | 2026-01-13 00:57:20.378100 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-13 00:57:20.378107 | orchestrator | Tuesday 13 January 2026 00:55:14 +0000 (0:00:00.273) 0:00:03.639 ******* 2026-01-13 00:57:20.378114 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.378121 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.378127 | orchestrator 
| skipping: [testbed-node-5] 2026-01-13 00:57:20.378134 | orchestrator | 2026-01-13 00:57:20.378141 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-13 00:57:20.378148 | orchestrator | Tuesday 13 January 2026 00:55:15 +0000 (0:00:00.502) 0:00:04.142 ******* 2026-01-13 00:57:20.378155 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.378161 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:57:20.378212 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.378221 | orchestrator | 2026-01-13 00:57:20.378236 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-13 00:57:20.378243 | orchestrator | Tuesday 13 January 2026 00:55:15 +0000 (0:00:00.313) 0:00:04.456 ******* 2026-01-13 00:57:20.378250 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-13 00:57:20.378257 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-13 00:57:20.378263 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-13 00:57:20.378269 | orchestrator | 2026-01-13 00:57:20.378275 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-13 00:57:20.378282 | orchestrator | Tuesday 13 January 2026 00:55:16 +0000 (0:00:00.637) 0:00:05.093 ******* 2026-01-13 00:57:20.378288 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.378294 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:57:20.378372 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.378549 | orchestrator | 2026-01-13 00:57:20.378558 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-13 00:57:20.378565 | orchestrator | Tuesday 13 January 2026 00:55:16 +0000 (0:00:00.436) 0:00:05.529 ******* 2026-01-13 00:57:20.378572 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-13 00:57:20.378579 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-13 00:57:20.378595 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-13 00:57:20.378602 | orchestrator | 2026-01-13 00:57:20.378609 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-13 00:57:20.378616 | orchestrator | Tuesday 13 January 2026 00:55:18 +0000 (0:00:02.055) 0:00:07.585 ******* 2026-01-13 00:57:20.378622 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-13 00:57:20.378630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-13 00:57:20.378636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-13 00:57:20.378643 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.378650 | orchestrator | 2026-01-13 00:57:20.378687 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-13 00:57:20.378696 | orchestrator | Tuesday 13 January 2026 00:55:19 +0000 (0:00:00.657) 0:00:08.242 ******* 2026-01-13 00:57:20.378702 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.378710 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.378716 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.378722 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.378728 | orchestrator | 2026-01-13 00:57:20.378734 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-13 00:57:20.378741 | orchestrator | Tuesday 13 January 2026 00:55:20 +0000 (0:00:00.825) 0:00:09.068 ******* 2026-01-13 00:57:20.378749 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.378757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.378763 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.378770 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.378776 | orchestrator | 2026-01-13 00:57:20.378782 | orchestrator | TASK [ceph-facts : Set_fact 
running_mon - container] *************************** 2026-01-13 00:57:20.378789 | orchestrator | Tuesday 13 January 2026 00:55:20 +0000 (0:00:00.417) 0:00:09.486 ******* 2026-01-13 00:57:20.378803 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '000f2103ee99', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-13 00:55:17.152761', 'end': '2026-01-13 00:55:17.180036', 'delta': '0:00:00.027275', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['000f2103ee99'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-13 00:57:20.378818 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '84ac1be37239', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-13 00:55:17.884463', 'end': '2026-01-13 00:55:17.909365', 'delta': '0:00:00.024902', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['84ac1be37239'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-13 00:57:20.378863 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '01f2672e35a4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-13 
00:55:18.404904', 'end': '2026-01-13 00:55:18.432868', 'delta': '0:00:00.027964', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01f2672e35a4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-13 00:57:20.378871 | orchestrator | 2026-01-13 00:57:20.378877 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-13 00:57:20.378883 | orchestrator | Tuesday 13 January 2026 00:55:20 +0000 (0:00:00.198) 0:00:09.685 ******* 2026-01-13 00:57:20.378892 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.378898 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:57:20.378904 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.378911 | orchestrator | 2026-01-13 00:57:20.378917 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-13 00:57:20.378923 | orchestrator | Tuesday 13 January 2026 00:55:21 +0000 (0:00:00.439) 0:00:10.124 ******* 2026-01-13 00:57:20.378929 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-13 00:57:20.378935 | orchestrator | 2026-01-13 00:57:20.378942 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-13 00:57:20.378947 | orchestrator | Tuesday 13 January 2026 00:55:22 +0000 (0:00:01.873) 0:00:11.998 ******* 2026-01-13 00:57:20.378954 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.378960 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.378966 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.378972 | orchestrator | 2026-01-13 00:57:20.378979 | 
orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-13 00:57:20.378985 | orchestrator | Tuesday 13 January 2026 00:55:23 +0000 (0:00:00.295) 0:00:12.294 ******* 2026-01-13 00:57:20.378991 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.378997 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.379003 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.379009 | orchestrator | 2026-01-13 00:57:20.379015 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-13 00:57:20.379021 | orchestrator | Tuesday 13 January 2026 00:55:23 +0000 (0:00:00.391) 0:00:12.685 ******* 2026-01-13 00:57:20.379027 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.379039 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.379045 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.379051 | orchestrator | 2026-01-13 00:57:20.379057 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-13 00:57:20.379063 | orchestrator | Tuesday 13 January 2026 00:55:24 +0000 (0:00:00.489) 0:00:13.174 ******* 2026-01-13 00:57:20.379070 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.379076 | orchestrator | 2026-01-13 00:57:20.379083 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-13 00:57:20.379089 | orchestrator | Tuesday 13 January 2026 00:55:24 +0000 (0:00:00.113) 0:00:13.288 ******* 2026-01-13 00:57:20.379096 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.379102 | orchestrator | 2026-01-13 00:57:20.379108 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-13 00:57:20.379114 | orchestrator | Tuesday 13 January 2026 00:55:24 +0000 (0:00:00.233) 0:00:13.521 ******* 2026-01-13 00:57:20.379125 | orchestrator | skipping: [testbed-node-3] 
2026-01-13 00:57:20.379131 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.379137 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.379144 | orchestrator | 2026-01-13 00:57:20.379151 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-13 00:57:20.379157 | orchestrator | Tuesday 13 January 2026 00:55:24 +0000 (0:00:00.285) 0:00:13.807 ******* 2026-01-13 00:57:20.379164 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.379170 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.379176 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.379182 | orchestrator | 2026-01-13 00:57:20.379189 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-13 00:57:20.379195 | orchestrator | Tuesday 13 January 2026 00:55:25 +0000 (0:00:00.310) 0:00:14.117 ******* 2026-01-13 00:57:20.379201 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.379207 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.379212 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.379218 | orchestrator | 2026-01-13 00:57:20.379224 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-13 00:57:20.379230 | orchestrator | Tuesday 13 January 2026 00:55:25 +0000 (0:00:00.528) 0:00:14.646 ******* 2026-01-13 00:57:20.379237 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.379243 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.379249 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.379256 | orchestrator | 2026-01-13 00:57:20.379262 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-13 00:57:20.379268 | orchestrator | Tuesday 13 January 2026 00:55:25 +0000 (0:00:00.311) 0:00:14.957 ******* 2026-01-13 00:57:20.379274 | orchestrator | skipping: [testbed-node-3] 
2026-01-13 00:57:20.379281 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.379286 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.379293 | orchestrator | 2026-01-13 00:57:20.379299 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-13 00:57:20.379305 | orchestrator | Tuesday 13 January 2026 00:55:26 +0000 (0:00:00.307) 0:00:15.265 ******* 2026-01-13 00:57:20.379312 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.379317 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.379323 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.379360 | orchestrator | 2026-01-13 00:57:20.379368 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-13 00:57:20.379373 | orchestrator | Tuesday 13 January 2026 00:55:26 +0000 (0:00:00.322) 0:00:15.587 ******* 2026-01-13 00:57:20.379379 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.379385 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.379391 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.379397 | orchestrator | 2026-01-13 00:57:20.379403 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-13 00:57:20.379410 | orchestrator | Tuesday 13 January 2026 00:55:27 +0000 (0:00:00.564) 0:00:16.151 ******* 2026-01-13 00:57:20.379424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b9be54a9--cd9c--568c--9220--61b18da052d9-osd--block--b9be54a9--cd9c--568c--9220--61b18da052d9', 'dm-uuid-LVM-tI9LueIqoznnHWvc67dyxcKb2DRlZadxhD8MTBDVbVSuVr75iGA0ykjhJTLhvbvd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 
'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--03961d85--1922--5669--8251--0ccc6cca9fac-osd--block--03961d85--1922--5669--8251--0ccc6cca9fac', 'dm-uuid-LVM-GHCgDfhjqHbxrN6X57Au2JxG0UkZVV6SYAZhc8KzmZuq1WeEWDc3uD3fnm7izynW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5-osd--block--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5', 'dm-uuid-LVM-hgtH6tpzhnx2QQztd0bAxtrFNuWF2rUJ5NeecY0iboAd4WuXz2J4zhiyU5ciBGer'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--2b3e8737--91e3--53c0--9b3a--5288a4111b63-osd--block--2b3e8737--91e3--53c0--9b3a--5288a4111b63', 'dm-uuid-LVM-xy47BTMmezzKuhVgeOBsrflxsh2nMMZxq1yfesNVZn38knC8hXtIcPF2l4aSiZtk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e91d200a--cf56--55df--b2f8--08f15361112f-osd--block--e91d200a--cf56--55df--b2f8--08f15361112f', 
'dm-uuid-LVM-xweh1YC5RDiVWhdx1PKskF5JCr6mh2cIruH8cXC0TzdCdfDxhyAoa4ykUz6BhD3x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ebda4f6--7b50--59b0--8273--b291dd7d1677-osd--block--7ebda4f6--7b50--59b0--8273--b291dd7d1677', 'dm-uuid-LVM-qXJ0ZdEvWcXk2vDlKmzolqGpgokmwsYUBrLx3bsgLllWFSDGo0grKruMjv28g8BC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379642 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part1', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part14', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part15', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part16', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b9be54a9--cd9c--568c--9220--61b18da052d9-osd--block--b9be54a9--cd9c--568c--9220--61b18da052d9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sZILOh-5tjd-Njbz-niJz-MLcH-ddwd-N90s5N', 'scsi-0QEMU_QEMU_HARDDISK_49cd33e4-72cd-4f3f-940d-55c9f0f00a98', 'scsi-SQEMU_QEMU_HARDDISK_49cd33e4-72cd-4f3f-940d-55c9f0f00a98'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--03961d85--1922--5669--8251--0ccc6cca9fac-osd--block--03961d85--1922--5669--8251--0ccc6cca9fac'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uzsK8R-6Gxn-oDpB-2Hms-tH0u-G7ac-aGEaLg', 'scsi-0QEMU_QEMU_HARDDISK_1f00cc32-4927-4d99-9c1e-b649b1d1f573', 'scsi-SQEMU_QEMU_HARDDISK_1f00cc32-4927-4d99-9c1e-b649b1d1f573'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a292857-8cd9-4a14-95ba-a5d022f4a90e', 'scsi-SQEMU_QEMU_HARDDISK_0a292857-8cd9-4a14-95ba-a5d022f4a90e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
 2026-01-13 00:57:20.379808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-13 00:57:20.379816 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.379830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part1', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part14', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part15', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 
'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part16', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part1', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part14', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part15', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part16', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5-osd--block--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6DKvSU-Cdbw-CbUk-lrwG-gfma-BvTf-I6WE2Y', 'scsi-0QEMU_QEMU_HARDDISK_6ad71b9e-76db-4ac5-b372-050f59253056', 'scsi-SQEMU_QEMU_HARDDISK_6ad71b9e-76db-4ac5-b372-050f59253056'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e91d200a--cf56--55df--b2f8--08f15361112f-osd--block--e91d200a--cf56--55df--b2f8--08f15361112f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FH2r3c-Cf2J-ryeq-ItYe-hsKy-vARI-3t2Zip', 'scsi-0QEMU_QEMU_HARDDISK_79922d84-0445-4535-976b-32e74e35a748', 'scsi-SQEMU_QEMU_HARDDISK_79922d84-0445-4535-976b-32e74e35a748'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2b3e8737--91e3--53c0--9b3a--5288a4111b63-osd--block--2b3e8737--91e3--53c0--9b3a--5288a4111b63'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-U2UOZW-SjBW-buGp-55CV-6Kqk-QEzG-AXKRcv', 'scsi-0QEMU_QEMU_HARDDISK_9db8234e-f6a8-4211-a809-87a509109e78', 'scsi-SQEMU_QEMU_HARDDISK_9db8234e-f6a8-4211-a809-87a509109e78'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7ebda4f6--7b50--59b0--8273--b291dd7d1677-osd--block--7ebda4f6--7b50--59b0--8273--b291dd7d1677'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xHcG0E-vZHx-JCHk-rp13-0i6I-R8mG-hkrVOO', 'scsi-0QEMU_QEMU_HARDDISK_f69e02e7-d854-4ded-bb8d-51d0e0400336', 'scsi-SQEMU_QEMU_HARDDISK_f69e02e7-d854-4ded-bb8d-51d0e0400336'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0bff01-3898-4d25-903e-2ecdf087243c', 'scsi-SQEMU_QEMU_HARDDISK_5c0bff01-3898-4d25-903e-2ecdf087243c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5295d09e-fddd-4452-8a25-9ba23e2b95ae', 'scsi-SQEMU_QEMU_HARDDISK_5295d09e-fddd-4452-8a25-9ba23e2b95ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379912 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.379922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-13 00:57:20.379929 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.379935 | orchestrator | 2026-01-13 00:57:20.379942 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when 
osd_auto_discovery] *** 2026-01-13 00:57:20.379948 | orchestrator | Tuesday 13 January 2026 00:55:27 +0000 (0:00:00.720) 0:00:16.872 ******* 2026-01-13 00:57:20.379955 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b9be54a9--cd9c--568c--9220--61b18da052d9-osd--block--b9be54a9--cd9c--568c--9220--61b18da052d9', 'dm-uuid-LVM-tI9LueIqoznnHWvc67dyxcKb2DRlZadxhD8MTBDVbVSuVr75iGA0ykjhJTLhvbvd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.379962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--03961d85--1922--5669--8251--0ccc6cca9fac-osd--block--03961d85--1922--5669--8251--0ccc6cca9fac', 'dm-uuid-LVM-GHCgDfhjqHbxrN6X57Au2JxG0UkZVV6SYAZhc8KzmZuq1WeEWDc3uD3fnm7izynW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.379970 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.379977 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.379987 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.379998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380003 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380009 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380016 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5-osd--block--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5', 'dm-uuid-LVM-hgtH6tpzhnx2QQztd0bAxtrFNuWF2rUJ5NeecY0iboAd4WuXz2J4zhiyU5ciBGer'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380025 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380080 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b3e8737--91e3--53c0--9b3a--5288a4111b63-osd--block--2b3e8737--91e3--53c0--9b3a--5288a4111b63', 'dm-uuid-LVM-xy47BTMmezzKuhVgeOBsrflxsh2nMMZxq1yfesNVZn38knC8hXtIcPF2l4aSiZtk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-01-13 00:57:20.380092 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380099 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380111 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part1', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part14', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part15', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part16', 'scsi-SQEMU_QEMU_HARDDISK_ffeaaf24-9754-44c8-bb36-eb3a5d2d5315-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-13 00:57:20.380123 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380135 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b9be54a9--cd9c--568c--9220--61b18da052d9-osd--block--b9be54a9--cd9c--568c--9220--61b18da052d9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sZILOh-5tjd-Njbz-niJz-MLcH-ddwd-N90s5N', 'scsi-0QEMU_QEMU_HARDDISK_49cd33e4-72cd-4f3f-940d-55c9f0f00a98', 'scsi-SQEMU_QEMU_HARDDISK_49cd33e4-72cd-4f3f-940d-55c9f0f00a98'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380142 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--03961d85--1922--5669--8251--0ccc6cca9fac-osd--block--03961d85--1922--5669--8251--0ccc6cca9fac'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uzsK8R-6Gxn-oDpB-2Hms-tH0u-G7ac-aGEaLg', 'scsi-0QEMU_QEMU_HARDDISK_1f00cc32-4927-4d99-9c1e-b649b1d1f573', 'scsi-SQEMU_QEMU_HARDDISK_1f00cc32-4927-4d99-9c1e-b649b1d1f573'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380159 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380172 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0a292857-8cd9-4a14-95ba-a5d022f4a90e', 'scsi-SQEMU_QEMU_HARDDISK_0a292857-8cd9-4a14-95ba-a5d022f4a90e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380183 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380198 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380203 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.380210 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380219 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380235 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part1', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part14', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part15', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part16', 'scsi-SQEMU_QEMU_HARDDISK_5f6d3b65-3844-4001-8889-d6deb3f0644d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380242 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e91d200a--cf56--55df--b2f8--08f15361112f-osd--block--e91d200a--cf56--55df--b2f8--08f15361112f', 'dm-uuid-LVM-xweh1YC5RDiVWhdx1PKskF5JCr6mh2cIruH8cXC0TzdCdfDxhyAoa4ykUz6BhD3x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380251 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5-osd--block--11aa5137--b5aa--5373--b4c1--0bd5a429c1a5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6DKvSU-Cdbw-CbUk-lrwG-gfma-BvTf-I6WE2Y', 'scsi-0QEMU_QEMU_HARDDISK_6ad71b9e-76db-4ac5-b372-050f59253056', 'scsi-SQEMU_QEMU_HARDDISK_6ad71b9e-76db-4ac5-b372-050f59253056'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380261 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ebda4f6--7b50--59b0--8273--b291dd7d1677-osd--block--7ebda4f6--7b50--59b0--8273--b291dd7d1677', 'dm-uuid-LVM-qXJ0ZdEvWcXk2vDlKmzolqGpgokmwsYUBrLx3bsgLllWFSDGo0grKruMjv28g8BC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380272 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2b3e8737--91e3--53c0--9b3a--5288a4111b63-osd--block--2b3e8737--91e3--53c0--9b3a--5288a4111b63'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-U2UOZW-SjBW-buGp-55CV-6Kqk-QEzG-AXKRcv', 'scsi-0QEMU_QEMU_HARDDISK_9db8234e-f6a8-4211-a809-87a509109e78', 'scsi-SQEMU_QEMU_HARDDISK_9db8234e-f6a8-4211-a809-87a509109e78'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380279 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380285 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0bff01-3898-4d25-903e-2ecdf087243c', 'scsi-SQEMU_QEMU_HARDDISK_5c0bff01-3898-4d25-903e-2ecdf087243c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380291 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380304 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380311 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380318 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.380328 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380335 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380342 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380349 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380363 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380373 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part1', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part14', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part15', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part16', 'scsi-SQEMU_QEMU_HARDDISK_306cfbe9-242f-441d-bc49-37fa1b1f4569-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380380 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e91d200a--cf56--55df--b2f8--08f15361112f-osd--block--e91d200a--cf56--55df--b2f8--08f15361112f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FH2r3c-Cf2J-ryeq-ItYe-hsKy-vARI-3t2Zip', 'scsi-0QEMU_QEMU_HARDDISK_79922d84-0445-4535-976b-32e74e35a748', 'scsi-SQEMU_QEMU_HARDDISK_79922d84-0445-4535-976b-32e74e35a748'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380394 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7ebda4f6--7b50--59b0--8273--b291dd7d1677-osd--block--7ebda4f6--7b50--59b0--8273--b291dd7d1677'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xHcG0E-vZHx-JCHk-rp13-0i6I-R8mG-hkrVOO', 'scsi-0QEMU_QEMU_HARDDISK_f69e02e7-d854-4ded-bb8d-51d0e0400336', 'scsi-SQEMU_QEMU_HARDDISK_f69e02e7-d854-4ded-bb8d-51d0e0400336'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380401 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5295d09e-fddd-4452-8a25-9ba23e2b95ae', 'scsi-SQEMU_QEMU_HARDDISK_5295d09e-fddd-4452-8a25-9ba23e2b95ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380413 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-13-00-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-13 00:57:20.380420 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.380426 | orchestrator | 2026-01-13 00:57:20.380433 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-13 00:57:20.380440 | orchestrator | Tuesday 13 January 2026 00:55:28 +0000 (0:00:00.562) 0:00:17.434 ******* 2026-01-13 00:57:20.380447 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.380453 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:57:20.380460 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.380466 | orchestrator | 2026-01-13 00:57:20.380472 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-13 00:57:20.380479 | orchestrator | Tuesday 13 January 2026 00:55:29 +0000 (0:00:00.656) 0:00:18.090 ******* 2026-01-13 00:57:20.380484 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.380489 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:57:20.380496 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.380502 | orchestrator | 2026-01-13 00:57:20.380508 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-13 00:57:20.380515 | orchestrator | Tuesday 13 January 2026 00:55:29 +0000 (0:00:00.541) 0:00:18.632 ******* 2026-01-13 00:57:20.380799 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.380828 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:57:20.380836 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.380844 | orchestrator | 2026-01-13 00:57:20.380852 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-13 00:57:20.380859 | orchestrator | Tuesday 13 January 2026 00:55:30 +0000 (0:00:00.613) 0:00:19.246 
******* 2026-01-13 00:57:20.380866 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.380875 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.380882 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.380889 | orchestrator | 2026-01-13 00:57:20.380896 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-13 00:57:20.380903 | orchestrator | Tuesday 13 January 2026 00:55:30 +0000 (0:00:00.310) 0:00:19.556 ******* 2026-01-13 00:57:20.380909 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.380915 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.380921 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.380927 | orchestrator | 2026-01-13 00:57:20.380934 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-13 00:57:20.380940 | orchestrator | Tuesday 13 January 2026 00:55:30 +0000 (0:00:00.429) 0:00:19.985 ******* 2026-01-13 00:57:20.380947 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.380953 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.380960 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.380966 | orchestrator | 2026-01-13 00:57:20.380972 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-13 00:57:20.380979 | orchestrator | Tuesday 13 January 2026 00:55:31 +0000 (0:00:00.547) 0:00:20.533 ******* 2026-01-13 00:57:20.380986 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-13 00:57:20.380993 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-13 00:57:20.380999 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-13 00:57:20.381006 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-13 00:57:20.381012 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-13 00:57:20.381018 | orchestrator | 
ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-13 00:57:20.381024 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-13 00:57:20.381035 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-13 00:57:20.381042 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-13 00:57:20.381048 | orchestrator | 2026-01-13 00:57:20.381055 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-13 00:57:20.381061 | orchestrator | Tuesday 13 January 2026 00:55:32 +0000 (0:00:01.011) 0:00:21.545 ******* 2026-01-13 00:57:20.381068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-13 00:57:20.381074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-13 00:57:20.381080 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-13 00:57:20.381086 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.381093 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-13 00:57:20.381099 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-13 00:57:20.381106 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-13 00:57:20.381112 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.381118 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-13 00:57:20.381124 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-13 00:57:20.381130 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-13 00:57:20.381136 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.381143 | orchestrator | 2026-01-13 00:57:20.381149 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-13 00:57:20.381156 | orchestrator | Tuesday 13 January 2026 00:55:32 +0000 (0:00:00.370) 0:00:21.915 ******* 2026-01-13 
00:57:20.381163 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 00:57:20.381175 | orchestrator | 2026-01-13 00:57:20.381181 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-13 00:57:20.381188 | orchestrator | Tuesday 13 January 2026 00:55:33 +0000 (0:00:00.799) 0:00:22.715 ******* 2026-01-13 00:57:20.381203 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.381209 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.381216 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.381222 | orchestrator | 2026-01-13 00:57:20.381228 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-13 00:57:20.381234 | orchestrator | Tuesday 13 January 2026 00:55:33 +0000 (0:00:00.303) 0:00:23.019 ******* 2026-01-13 00:57:20.381240 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.381246 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.381252 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.381258 | orchestrator | 2026-01-13 00:57:20.381265 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-13 00:57:20.381272 | orchestrator | Tuesday 13 January 2026 00:55:34 +0000 (0:00:00.379) 0:00:23.398 ******* 2026-01-13 00:57:20.381277 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.381283 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.381289 | orchestrator | skipping: [testbed-node-5] 2026-01-13 00:57:20.381294 | orchestrator | 2026-01-13 00:57:20.381299 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-13 00:57:20.381305 | orchestrator | Tuesday 13 January 2026 00:55:34 +0000 (0:00:00.257) 0:00:23.655 ******* 2026-01-13 
00:57:20.381311 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.381318 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:57:20.381324 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.381331 | orchestrator | 2026-01-13 00:57:20.381337 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-13 00:57:20.381344 | orchestrator | Tuesday 13 January 2026 00:55:35 +0000 (0:00:00.495) 0:00:24.151 ******* 2026-01-13 00:57:20.381351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-13 00:57:20.381357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-13 00:57:20.381364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-13 00:57:20.381370 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.381376 | orchestrator | 2026-01-13 00:57:20.381382 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-13 00:57:20.381389 | orchestrator | Tuesday 13 January 2026 00:55:35 +0000 (0:00:00.345) 0:00:24.497 ******* 2026-01-13 00:57:20.381395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-13 00:57:20.381402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-13 00:57:20.381408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-13 00:57:20.381414 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.381421 | orchestrator | 2026-01-13 00:57:20.381429 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-13 00:57:20.381436 | orchestrator | Tuesday 13 January 2026 00:55:35 +0000 (0:00:00.311) 0:00:24.808 ******* 2026-01-13 00:57:20.381444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-13 00:57:20.381450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-13 00:57:20.381456 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-13 00:57:20.381464 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.381472 | orchestrator | 2026-01-13 00:57:20.381479 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-13 00:57:20.381487 | orchestrator | Tuesday 13 January 2026 00:55:36 +0000 (0:00:00.293) 0:00:25.101 ******* 2026-01-13 00:57:20.381494 | orchestrator | ok: [testbed-node-3] 2026-01-13 00:57:20.381500 | orchestrator | ok: [testbed-node-4] 2026-01-13 00:57:20.381512 | orchestrator | ok: [testbed-node-5] 2026-01-13 00:57:20.381533 | orchestrator | 2026-01-13 00:57:20.381540 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-13 00:57:20.381548 | orchestrator | Tuesday 13 January 2026 00:55:36 +0000 (0:00:00.248) 0:00:25.350 ******* 2026-01-13 00:57:20.381556 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-13 00:57:20.381562 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-13 00:57:20.381577 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-13 00:57:20.381584 | orchestrator | 2026-01-13 00:57:20.381592 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-13 00:57:20.381598 | orchestrator | Tuesday 13 January 2026 00:55:36 +0000 (0:00:00.451) 0:00:25.801 ******* 2026-01-13 00:57:20.381605 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-13 00:57:20.381612 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-13 00:57:20.381619 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-13 00:57:20.381625 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-13 00:57:20.381631 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-13 00:57:20.381638 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-13 00:57:20.381645 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-13 00:57:20.381651 | orchestrator | 2026-01-13 00:57:20.381657 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-13 00:57:20.381664 | orchestrator | Tuesday 13 January 2026 00:55:37 +0000 (0:00:00.820) 0:00:26.622 ******* 2026-01-13 00:57:20.381670 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-13 00:57:20.381676 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-13 00:57:20.381683 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-13 00:57:20.381689 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-13 00:57:20.381708 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-13 00:57:20.381720 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-13 00:57:20.381733 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-13 00:57:20.381740 | orchestrator | 2026-01-13 00:57:20.381746 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-13 00:57:20.381753 | orchestrator | Tuesday 13 January 2026 00:55:39 +0000 (0:00:01.620) 0:00:28.242 ******* 2026-01-13 00:57:20.381759 | orchestrator | skipping: [testbed-node-3] 2026-01-13 00:57:20.381765 | orchestrator | skipping: [testbed-node-4] 2026-01-13 00:57:20.381771 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-13 00:57:20.381777 | orchestrator | 2026-01-13 00:57:20.381783 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-13 00:57:20.381789 | orchestrator | Tuesday 13 January 2026 00:55:39 +0000 (0:00:00.376) 0:00:28.619 ******* 2026-01-13 00:57:20.381796 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-13 00:57:20.381804 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-13 00:57:20.381811 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-13 00:57:20.381823 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-13 00:57:20.381830 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-13 00:57:20.381836 | orchestrator | 2026-01-13 00:57:20.381842 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-01-13 00:57:20.381849 | orchestrator | Tuesday 13 January 2026 00:56:26 +0000 (0:00:46.576) 0:01:15.195 ******* 2026-01-13 00:57:20.381855 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.381861 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.381867 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.381873 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.381879 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.381889 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.381894 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-13 00:57:20.381900 | orchestrator | 2026-01-13 00:57:20.381905 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-13 00:57:20.381911 | orchestrator | Tuesday 13 January 2026 00:56:49 +0000 (0:00:23.636) 0:01:38.832 ******* 2026-01-13 00:57:20.381917 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.381923 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.381929 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.381935 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.381941 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.381947 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.381953 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-13 00:57:20.381959 | orchestrator | 2026-01-13 00:57:20.381965 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-13 00:57:20.381972 | orchestrator | Tuesday 13 January 2026 00:57:01 +0000 (0:00:12.096) 0:01:50.929 ******* 2026-01-13 00:57:20.381978 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.381985 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-13 00:57:20.381991 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-13 00:57:20.381998 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.382004 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-13 00:57:20.382065 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-13 00:57:20.382077 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.382085 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-13 00:57:20.382101 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-13 00:57:20.382109 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.382116 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-13 00:57:20.382122 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-13 00:57:20.382128 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.382136 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-01-13 00:57:20.382143 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-13 00:57:20.382152 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-13 00:57:20.382161 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-13 00:57:20.382169 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-13 00:57:20.382177 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-13 00:57:20.382187 | orchestrator | 2026-01-13 00:57:20.382197 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:57:20.382205 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-13 00:57:20.382214 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-13 00:57:20.382223 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-13 00:57:20.382233 | orchestrator | 2026-01-13 00:57:20.382241 | orchestrator | 2026-01-13 00:57:20.382250 | orchestrator | 2026-01-13 00:57:20.382258 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:57:20.382266 | orchestrator | Tuesday 13 January 2026 00:57:19 +0000 (0:00:17.775) 0:02:08.704 ******* 2026-01-13 00:57:20.382274 | orchestrator | =============================================================================== 2026-01-13 00:57:20.382282 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.58s 2026-01-13 00:57:20.382290 | orchestrator | generate keys ---------------------------------------------------------- 23.64s 2026-01-13 00:57:20.382298 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.78s 
2026-01-13 00:57:20.382306 | orchestrator | get keys from monitors ------------------------------------------------- 12.10s 2026-01-13 00:57:20.382315 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.06s 2026-01-13 00:57:20.382322 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.87s 2026-01-13 00:57:20.382331 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.62s 2026-01-13 00:57:20.382339 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.01s 2026-01-13 00:57:20.382352 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.84s 2026-01-13 00:57:20.382359 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.83s 2026-01-13 00:57:20.382365 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.82s 2026-01-13 00:57:20.382372 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.80s 2026-01-13 00:57:20.382378 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.72s 2026-01-13 00:57:20.382386 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.66s 2026-01-13 00:57:20.382393 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s 2026-01-13 00:57:20.382401 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s 2026-01-13 00:57:20.382415 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.63s 2026-01-13 00:57:20.382453 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.61s 2026-01-13 00:57:20.382459 | orchestrator | ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks --- 0.56s 2026-01-13 
00:57:20.382465 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.56s 2026-01-13 00:57:20.382471 | orchestrator | 2026-01-13 00:57:20 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:57:20.382477 | orchestrator | 2026-01-13 00:57:20 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:57:23.437774 | orchestrator | 2026-01-13 00:57:23 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:57:23.440165 | orchestrator | 2026-01-13 00:57:23 | INFO  | Task c115080c-bb95-4c64-9141-3881243c5ded is in state STARTED 2026-01-13 00:57:23.440209 | orchestrator | 2026-01-13 00:57:23 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:57:26.484070 | orchestrator | 2026-01-13 00:57:26 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:57:26.485947 | orchestrator | 2026-01-13 00:57:26 | INFO  | Task c115080c-bb95-4c64-9141-3881243c5ded is in state STARTED 2026-01-13 00:57:26.485996 | orchestrator | 2026-01-13 00:57:26 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:57:29.532632 | orchestrator | 2026-01-13 00:57:29 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:57:29.535431 | orchestrator | 2026-01-13 00:57:29 | INFO  | Task c115080c-bb95-4c64-9141-3881243c5ded is in state STARTED 2026-01-13 00:57:29.535484 | orchestrator | 2026-01-13 00:57:29 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:57:32.579201 | orchestrator | 2026-01-13 00:57:32 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:57:32.581254 | orchestrator | 2026-01-13 00:57:32 | INFO  | Task c115080c-bb95-4c64-9141-3881243c5ded is in state STARTED 2026-01-13 00:57:32.581597 | orchestrator | 2026-01-13 00:57:32 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:57:35.645801 | orchestrator | 2026-01-13 00:57:35 | INFO  | Task 
d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:57:35.647536 | orchestrator | 2026-01-13 00:57:35 | INFO  | Task c115080c-bb95-4c64-9141-3881243c5ded is in state STARTED 2026-01-13 00:57:35.647774 | orchestrator | 2026-01-13 00:57:35 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:57:38.697073 | orchestrator | 2026-01-13 00:57:38 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:57:38.699593 | orchestrator | 2026-01-13 00:57:38 | INFO  | Task c115080c-bb95-4c64-9141-3881243c5ded is in state STARTED 2026-01-13 00:57:38.699697 | orchestrator | 2026-01-13 00:57:38 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:57:41.750202 | orchestrator | 2026-01-13 00:57:41 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:57:41.752194 | orchestrator | 2026-01-13 00:57:41 | INFO  | Task c115080c-bb95-4c64-9141-3881243c5ded is in state STARTED 2026-01-13 00:57:41.752251 | orchestrator | 2026-01-13 00:57:41 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:57:44.807680 | orchestrator | 2026-01-13 00:57:44 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:57:44.810935 | orchestrator | 2026-01-13 00:57:44 | INFO  | Task c115080c-bb95-4c64-9141-3881243c5ded is in state STARTED 2026-01-13 00:57:44.811020 | orchestrator | 2026-01-13 00:57:44 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:57:47.861216 | orchestrator | 2026-01-13 00:57:47 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:57:47.862476 | orchestrator | 2026-01-13 00:57:47 | INFO  | Task c115080c-bb95-4c64-9141-3881243c5ded is in state STARTED 2026-01-13 00:57:47.862552 | orchestrator | 2026-01-13 00:57:47 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:57:50.912248 | orchestrator | 2026-01-13 00:57:50 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 
00:57:50.916539 | orchestrator | 2026-01-13 00:57:50 | INFO  | Task c115080c-bb95-4c64-9141-3881243c5ded is in state STARTED 2026-01-13 00:57:50.917205 | orchestrator | 2026-01-13 00:57:50 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:57:53.971421 | orchestrator | 2026-01-13 00:57:53 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:57:53.972649 | orchestrator | 2026-01-13 00:57:53 | INFO  | Task c115080c-bb95-4c64-9141-3881243c5ded is in state STARTED 2026-01-13 00:57:53.972697 | orchestrator | 2026-01-13 00:57:53 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:57:57.025763 | orchestrator | 2026-01-13 00:57:57 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:57:57.030372 | orchestrator | 2026-01-13 00:57:57 | INFO  | Task c115080c-bb95-4c64-9141-3881243c5ded is in state STARTED 2026-01-13 00:57:57.030428 | orchestrator | 2026-01-13 00:57:57 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:58:00.073936 | orchestrator | 2026-01-13 00:58:00 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:58:00.075511 | orchestrator | 2026-01-13 00:58:00 | INFO  | Task c115080c-bb95-4c64-9141-3881243c5ded is in state SUCCESS 2026-01-13 00:58:00.077310 | orchestrator | 2026-01-13 00:58:00 | INFO  | Task 65f5c828-773f-4884-becd-75772195a52b is in state STARTED 2026-01-13 00:58:00.077403 | orchestrator | 2026-01-13 00:58:00 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:58:03.130778 | orchestrator | 2026-01-13 00:58:03 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:58:03.132357 | orchestrator | 2026-01-13 00:58:03 | INFO  | Task 65f5c828-773f-4884-becd-75772195a52b is in state STARTED 2026-01-13 00:58:03.132496 | orchestrator | 2026-01-13 00:58:03 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:58:06.191945 | orchestrator | 2026-01-13 00:58:06 | INFO  | Task 
d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:58:06.193562 | orchestrator | 2026-01-13 00:58:06 | INFO  | Task 65f5c828-773f-4884-becd-75772195a52b is in state STARTED 2026-01-13 00:58:06.193678 | orchestrator | 2026-01-13 00:58:06 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:58:09.245734 | orchestrator | 2026-01-13 00:58:09 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:58:09.247828 | orchestrator | 2026-01-13 00:58:09 | INFO  | Task 65f5c828-773f-4884-becd-75772195a52b is in state STARTED 2026-01-13 00:58:09.248017 | orchestrator | 2026-01-13 00:58:09 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:58:12.294133 | orchestrator | 2026-01-13 00:58:12 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:58:12.294720 | orchestrator | 2026-01-13 00:58:12 | INFO  | Task 65f5c828-773f-4884-becd-75772195a52b is in state STARTED 2026-01-13 00:58:12.294773 | orchestrator | 2026-01-13 00:58:12 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:58:15.349818 | orchestrator | 2026-01-13 00:58:15 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state STARTED 2026-01-13 00:58:15.351032 | orchestrator | 2026-01-13 00:58:15 | INFO  | Task 65f5c828-773f-4884-becd-75772195a52b is in state STARTED 2026-01-13 00:58:15.351266 | orchestrator | 2026-01-13 00:58:15 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:58:18.396310 | orchestrator | 2026-01-13 00:58:18 | INFO  | Task d32e32cc-419a-40cc-bd27-890c92e82cbf is in state SUCCESS 2026-01-13 00:58:18.397296 | orchestrator | 2026-01-13 00:58:18.397350 | orchestrator | 2026-01-13 00:58:18.397370 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-13 00:58:18.397454 | orchestrator | 2026-01-13 00:58:18.397472 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-13 
00:58:18.397489 | orchestrator | Tuesday 13 January 2026 00:57:24 +0000 (0:00:00.114) 0:00:00.114 ******* 2026-01-13 00:58:18.398183 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-13 00:58:18.398282 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-13 00:58:18.398293 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-13 00:58:18.398300 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-13 00:58:18.398307 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-13 00:58:18.398333 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-13 00:58:18.398342 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-13 00:58:18.398356 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-13 00:58:18.398372 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-13 00:58:18.398407 | orchestrator | 2026-01-13 00:58:18.398419 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-13 00:58:18.398429 | orchestrator | Tuesday 13 January 2026 00:57:28 +0000 (0:00:04.439) 0:00:04.553 ******* 2026-01-13 00:58:18.398439 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-13 00:58:18.398449 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-13 00:58:18.398459 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder.keyring) 2026-01-13 00:58:18.398468 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-13 00:58:18.398478 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-13 00:58:18.398488 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-13 00:58:18.398498 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-13 00:58:18.398508 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-13 00:58:18.398518 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-13 00:58:18.398528 | orchestrator | 2026-01-13 00:58:18.398538 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-13 00:58:18.398548 | orchestrator | Tuesday 13 January 2026 00:57:33 +0000 (0:00:04.412) 0:00:08.966 ******* 2026-01-13 00:58:18.398561 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-13 00:58:18.398573 | orchestrator | 2026-01-13 00:58:18.398583 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-13 00:58:18.398593 | orchestrator | Tuesday 13 January 2026 00:57:34 +0000 (0:00:01.028) 0:00:09.994 ******* 2026-01-13 00:58:18.398625 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-13 00:58:18.398632 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-13 00:58:18.398639 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-13 00:58:18.398655 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 
2026-01-13 00:58:18.398661 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-13 00:58:18.398667 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-13 00:58:18.398674 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-13 00:58:18.398680 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-13 00:58:18.398686 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-01-13 00:58:18.398693 | orchestrator | 2026-01-13 00:58:18.398699 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-01-13 00:58:18.398705 | orchestrator | Tuesday 13 January 2026 00:57:47 +0000 (0:00:13.266) 0:00:23.261 ******* 2026-01-13 00:58:18.398711 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-13 00:58:18.398718 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-01-13 00:58:18.398724 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-13 00:58:18.398730 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-13 00:58:18.398813 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-13 00:58:18.398822 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-13 00:58:18.398829 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-13 00:58:18.398835 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-13 00:58:18.398841 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-13 00:58:18.398848 | orchestrator | 2026-01-13 00:58:18.398854 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-01-13 00:58:18.398861 | orchestrator | Tuesday 13 January 2026 00:57:51 +0000 (0:00:04.075) 0:00:27.336 ******* 2026-01-13 00:58:18.398867 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-01-13 00:58:18.398880 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-13 00:58:18.398886 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-13 00:58:18.398893 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-13 00:58:18.398899 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-13 00:58:18.398905 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-13 00:58:18.398911 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-01-13 00:58:18.398918 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-13 00:58:18.398925 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-13 00:58:18.398931 | orchestrator | 2026-01-13 00:58:18.398937 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:58:18.398943 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 00:58:18.398950 | orchestrator | 2026-01-13 00:58:18.398956 | orchestrator | 2026-01-13 00:58:18.398972 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 
00:58:18.398979 | orchestrator | Tuesday 13 January 2026 00:57:58 +0000 (0:00:07.208) 0:00:34.545 ******* 2026-01-13 00:58:18.398985 | orchestrator | =============================================================================== 2026-01-13 00:58:18.398991 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.27s 2026-01-13 00:58:18.398997 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.21s 2026-01-13 00:58:18.399004 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.44s 2026-01-13 00:58:18.399010 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.41s 2026-01-13 00:58:18.399021 | orchestrator | Check if target directories exist --------------------------------------- 4.08s 2026-01-13 00:58:18.399032 | orchestrator | Create share directory -------------------------------------------------- 1.03s 2026-01-13 00:58:18.399042 | orchestrator | 2026-01-13 00:58:18.399051 | orchestrator | 2026-01-13 00:58:18.399062 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 00:58:18.399073 | orchestrator | 2026-01-13 00:58:18.399084 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 00:58:18.399093 | orchestrator | Tuesday 13 January 2026 00:55:34 +0000 (0:00:00.256) 0:00:00.256 ******* 2026-01-13 00:58:18.399102 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:58:18.399113 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:58:18.399125 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:58:18.399136 | orchestrator | 2026-01-13 00:58:18.399146 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 00:58:18.399158 | orchestrator | Tuesday 13 January 2026 00:55:34 +0000 (0:00:00.232) 0:00:00.488 ******* 2026-01-13 00:58:18.399169 | orchestrator | 
ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-13 00:58:18.399191 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-13 00:58:18.399202 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-13 00:58:18.399213 | orchestrator | 2026-01-13 00:58:18.399220 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-13 00:58:18.399226 | orchestrator | 2026-01-13 00:58:18.399233 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-13 00:58:18.399239 | orchestrator | Tuesday 13 January 2026 00:55:34 +0000 (0:00:00.355) 0:00:00.843 ******* 2026-01-13 00:58:18.399245 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:58:18.399252 | orchestrator | 2026-01-13 00:58:18.399258 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-13 00:58:18.399265 | orchestrator | Tuesday 13 January 2026 00:55:35 +0000 (0:00:00.457) 0:00:01.301 ******* 2026-01-13 00:58:18.399331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.399351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.399367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.399376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-13 00:58:18.399473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-13 00:58:18.399489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-13 00:58:18.399497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.399517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.399524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.399531 | orchestrator | 2026-01-13 00:58:18.399538 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-13 00:58:18.399545 | orchestrator | Tuesday 13 January 2026 00:55:37 +0000 (0:00:01.705) 0:00:03.006 ******* 2026-01-13 00:58:18.399552 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.399559 | orchestrator | 2026-01-13 00:58:18.399566 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-13 00:58:18.399572 | orchestrator | Tuesday 13 January 2026 00:55:37 +0000 (0:00:00.112) 0:00:03.118 ******* 2026-01-13 00:58:18.399579 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.399586 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:58:18.399592 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:58:18.399599 | orchestrator | 2026-01-13 00:58:18.399605 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-13 00:58:18.399612 | orchestrator | Tuesday 13 January 2026 00:55:37 +0000 (0:00:00.390) 0:00:03.509 ******* 2026-01-13 00:58:18.399630 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-13 00:58:18.399644 | orchestrator | 2026-01-13 00:58:18.399651 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-13 00:58:18.399658 | orchestrator | Tuesday 13 January 2026 00:55:38 +0000 (0:00:00.738) 0:00:04.248 ******* 2026-01-13 00:58:18.399665 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:58:18.399671 | orchestrator | 2026-01-13 00:58:18.399678 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 
2026-01-13 00:58:18.399684 | orchestrator | Tuesday 13 January 2026 00:55:38 +0000 (0:00:00.499) 0:00:04.747 ******* 2026-01-13 00:58:18.399725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.399744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.399753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.399760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-13 00:58:18.399768 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-13 00:58:18.399775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-13 00:58:18.399796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.399807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.399815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.399822 | orchestrator | 2026-01-13 00:58:18.399829 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-13 00:58:18.399836 | orchestrator | Tuesday 13 January 2026 00:55:42 +0000 (0:00:03.370) 0:00:08.118 ******* 2026-01-13 00:58:18.399843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-13 00:58:18.399850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:58:18.399862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-13 00:58:18.399869 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.399884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-13 00:58:18.399892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:58:18.399898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-13 00:58:18.399905 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:58:18.399912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-13 00:58:18.399928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 
00:58:18.399939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-13 00:58:18.399946 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:58:18.399953 | orchestrator | 2026-01-13 00:58:18.399959 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-13 00:58:18.399966 | orchestrator | Tuesday 13 January 2026 00:55:42 +0000 (0:00:00.642) 0:00:08.760 ******* 2026-01-13 00:58:18.399976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}})  2026-01-13 00:58:18.399983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:58:18.399990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-13 00:58:18.399996 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.400004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-13 00:58:18.400020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:58:18.400031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-13 00:58:18.400037 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:58:18.400044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-13 00:58:18.400052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:58:18.400059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-13 00:58:18.400071 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:58:18.400078 | orchestrator | 2026-01-13 00:58:18.400085 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-13 00:58:18.400091 | orchestrator | Tuesday 13 January 2026 00:55:43 +0000 (0:00:00.639) 0:00:09.400 ******* 2026-01-13 00:58:18.400103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.400115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.400123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.400130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-13 00:58:18.400141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-13 00:58:18.400152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-13 00:58:18.400163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.400170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.400176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.400183 | orchestrator | 2026-01-13 00:58:18.400190 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-13 00:58:18.400196 | orchestrator | Tuesday 13 January 2026 00:55:46 +0000 (0:00:03.447) 0:00:12.848 ******* 2026-01-13 00:58:18.400203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.400214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:58:18.400227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.400239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:58:18.400246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.400257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:58:18.400263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.400274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.400282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.400289 | orchestrator | 2026-01-13 00:58:18.400295 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-13 00:58:18.400304 | orchestrator | Tuesday 13 January 2026 00:55:52 +0000 (0:00:05.521) 0:00:18.370 ******* 2026-01-13 00:58:18.400311 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:58:18.400317 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:58:18.400324 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:58:18.400330 | orchestrator | 2026-01-13 00:58:18.400337 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-13 00:58:18.400343 | orchestrator | Tuesday 13 January 2026 00:55:53 +0000 (0:00:01.428) 0:00:19.798 ******* 2026-01-13 00:58:18.400349 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.400356 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:58:18.400362 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:58:18.400368 | orchestrator | 2026-01-13 00:58:18.400374 | 
orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-13 00:58:18.400402 | orchestrator | Tuesday 13 January 2026 00:55:54 +0000 (0:00:00.599) 0:00:20.398 ******* 2026-01-13 00:58:18.400409 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.400415 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:58:18.400422 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:58:18.400433 | orchestrator | 2026-01-13 00:58:18.400439 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-13 00:58:18.400446 | orchestrator | Tuesday 13 January 2026 00:55:54 +0000 (0:00:00.284) 0:00:20.682 ******* 2026-01-13 00:58:18.400452 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.400458 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:58:18.400464 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:58:18.400471 | orchestrator | 2026-01-13 00:58:18.400479 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-13 00:58:18.400489 | orchestrator | Tuesday 13 January 2026 00:55:55 +0000 (0:00:00.595) 0:00:21.278 ******* 2026-01-13 00:58:18.400500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-13 00:58:18.400511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:58:18.400544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-13 00:58:18.400558 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.400573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-13 00:58:18.400591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:58:18.400601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-13 00:58:18.400611 | 
orchestrator | skipping: [testbed-node-1] 2026-01-13 00:58:18.400622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-13 00:58:18.400633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-13 00:58:18.400650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-13 00:58:18.400661 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:58:18.400671 | orchestrator | 2026-01-13 00:58:18.400682 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-13 00:58:18.400691 | orchestrator | Tuesday 13 January 2026 00:55:55 +0000 (0:00:00.568) 0:00:21.847 ******* 2026-01-13 00:58:18.400710 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.400727 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:58:18.400737 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:58:18.400747 | orchestrator | 2026-01-13 00:58:18.400757 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-13 00:58:18.400767 | orchestrator | Tuesday 13 January 2026 00:55:56 +0000 (0:00:00.301) 0:00:22.148 ******* 2026-01-13 00:58:18.400776 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-13 00:58:18.400782 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-13 00:58:18.400789 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-13 00:58:18.400795 | orchestrator | 2026-01-13 00:58:18.400801 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-13 00:58:18.400807 | orchestrator | Tuesday 13 January 2026 00:55:57 +0000 (0:00:01.800) 0:00:23.949 ******* 2026-01-13 00:58:18.400814 | orchestrator | ok: [testbed-node-0 
-> localhost] 2026-01-13 00:58:18.400820 | orchestrator | 2026-01-13 00:58:18.400826 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-13 00:58:18.400833 | orchestrator | Tuesday 13 January 2026 00:55:58 +0000 (0:00:00.900) 0:00:24.849 ******* 2026-01-13 00:58:18.400839 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.400845 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:58:18.400852 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:58:18.400858 | orchestrator | 2026-01-13 00:58:18.400864 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-13 00:58:18.400871 | orchestrator | Tuesday 13 January 2026 00:55:59 +0000 (0:00:00.852) 0:00:25.701 ******* 2026-01-13 00:58:18.400877 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-13 00:58:18.400883 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-13 00:58:18.400890 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-13 00:58:18.400896 | orchestrator | 2026-01-13 00:58:18.400902 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-13 00:58:18.400909 | orchestrator | Tuesday 13 January 2026 00:56:00 +0000 (0:00:01.059) 0:00:26.761 ******* 2026-01-13 00:58:18.400915 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:58:18.400922 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:58:18.400928 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:58:18.400934 | orchestrator | 2026-01-13 00:58:18.400940 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-13 00:58:18.400947 | orchestrator | Tuesday 13 January 2026 00:56:01 +0000 (0:00:00.296) 0:00:27.058 ******* 2026-01-13 00:58:18.400953 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-13 00:58:18.400959 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-13 00:58:18.400965 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-13 00:58:18.400971 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-13 00:58:18.400977 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-13 00:58:18.400984 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-13 00:58:18.400990 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-13 00:58:18.400997 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-13 00:58:18.401004 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-13 00:58:18.401010 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-13 00:58:18.401016 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-13 00:58:18.401028 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-13 00:58:18.401034 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-13 00:58:18.401040 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-13 00:58:18.401051 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-13 00:58:18.401058 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 
'id_rsa'}) 2026-01-13 00:58:18.401065 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-13 00:58:18.401071 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-13 00:58:18.401078 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-13 00:58:18.401084 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-13 00:58:18.401091 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-13 00:58:18.401097 | orchestrator | 2026-01-13 00:58:18.401103 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-13 00:58:18.401109 | orchestrator | Tuesday 13 January 2026 00:56:09 +0000 (0:00:08.632) 0:00:35.690 ******* 2026-01-13 00:58:18.401119 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-13 00:58:18.401126 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-13 00:58:18.401132 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-13 00:58:18.401139 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-13 00:58:18.401145 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-13 00:58:18.401151 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-13 00:58:18.401157 | orchestrator | 2026-01-13 00:58:18.401164 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-13 00:58:18.401170 | orchestrator | Tuesday 13 January 2026 00:56:12 +0000 (0:00:03.106) 0:00:38.797 ******* 2026-01-13 00:58:18.401178 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.401186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.401204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-13 00:58:18.401215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-13 00:58:18.401222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-13 00:58:18.401229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-13 00:58:18.401235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.401247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.401254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-13 00:58:18.401260 | orchestrator | 2026-01-13 00:58:18.401270 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-13 00:58:18.401277 | orchestrator | Tuesday 13 January 2026 00:56:15 +0000 (0:00:02.376) 0:00:41.173 ******* 2026-01-13 00:58:18.401284 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.401290 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:58:18.401297 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:58:18.401303 | orchestrator | 2026-01-13 00:58:18.401309 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-13 00:58:18.401315 | orchestrator | Tuesday 13 January 2026 00:56:15 +0000 (0:00:00.294) 0:00:41.467 ******* 2026-01-13 00:58:18.401322 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:58:18.401328 | orchestrator | 2026-01-13 00:58:18.401334 | orchestrator | TASK [keystone : Creating 
Keystone database user and setting permissions] ****** 2026-01-13 00:58:18.401341 | orchestrator | Tuesday 13 January 2026 00:56:17 +0000 (0:00:02.494) 0:00:43.961 ******* 2026-01-13 00:58:18.401347 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:58:18.401353 | orchestrator | 2026-01-13 00:58:18.401360 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-13 00:58:18.401369 | orchestrator | Tuesday 13 January 2026 00:56:20 +0000 (0:00:02.557) 0:00:46.519 ******* 2026-01-13 00:58:18.401375 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:58:18.401404 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:58:18.401416 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:58:18.401428 | orchestrator | 2026-01-13 00:58:18.401434 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-13 00:58:18.401441 | orchestrator | Tuesday 13 January 2026 00:56:21 +0000 (0:00:01.208) 0:00:47.727 ******* 2026-01-13 00:58:18.401447 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:58:18.401453 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:58:18.401460 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:58:18.401466 | orchestrator | 2026-01-13 00:58:18.401472 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-13 00:58:18.401479 | orchestrator | Tuesday 13 January 2026 00:56:22 +0000 (0:00:00.305) 0:00:48.033 ******* 2026-01-13 00:58:18.401485 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.401491 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:58:18.401497 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:58:18.401504 | orchestrator | 2026-01-13 00:58:18.401510 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-13 00:58:18.401516 | orchestrator | Tuesday 13 January 2026 00:56:22 +0000 (0:00:00.347) 0:00:48.381 ******* 
2026-01-13 00:58:18.401522 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:58:18.401534 | orchestrator | 2026-01-13 00:58:18.401541 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-01-13 00:58:18.401547 | orchestrator | Tuesday 13 January 2026 00:56:36 +0000 (0:00:14.013) 0:01:02.394 ******* 2026-01-13 00:58:18.401553 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:58:18.401560 | orchestrator | 2026-01-13 00:58:18.401566 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-13 00:58:18.401572 | orchestrator | Tuesday 13 January 2026 00:56:47 +0000 (0:00:11.244) 0:01:13.638 ******* 2026-01-13 00:58:18.401579 | orchestrator | 2026-01-13 00:58:18.401585 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-13 00:58:18.401591 | orchestrator | Tuesday 13 January 2026 00:56:47 +0000 (0:00:00.065) 0:01:13.704 ******* 2026-01-13 00:58:18.401598 | orchestrator | 2026-01-13 00:58:18.401605 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-13 00:58:18.401611 | orchestrator | Tuesday 13 January 2026 00:56:47 +0000 (0:00:00.077) 0:01:13.781 ******* 2026-01-13 00:58:18.401617 | orchestrator | 2026-01-13 00:58:18.401623 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-01-13 00:58:18.401630 | orchestrator | Tuesday 13 January 2026 00:56:47 +0000 (0:00:00.066) 0:01:13.848 ******* 2026-01-13 00:58:18.401636 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:58:18.401642 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:58:18.401649 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:58:18.401656 | orchestrator | 2026-01-13 00:58:18.401662 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-01-13 00:58:18.401669 | orchestrator | 
Tuesday 13 January 2026 00:57:03 +0000 (0:00:15.296) 0:01:29.145 ******* 2026-01-13 00:58:18.401675 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:58:18.401681 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:58:18.401687 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:58:18.401693 | orchestrator | 2026-01-13 00:58:18.401700 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-01-13 00:58:18.401706 | orchestrator | Tuesday 13 January 2026 00:57:08 +0000 (0:00:05.427) 0:01:34.572 ******* 2026-01-13 00:58:18.401713 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:58:18.401719 | orchestrator | changed: [testbed-node-1] 2026-01-13 00:58:18.401725 | orchestrator | changed: [testbed-node-2] 2026-01-13 00:58:18.401731 | orchestrator | 2026-01-13 00:58:18.401738 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-13 00:58:18.401744 | orchestrator | Tuesday 13 January 2026 00:57:19 +0000 (0:00:10.936) 0:01:45.509 ******* 2026-01-13 00:58:18.401750 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 00:58:18.401757 | orchestrator | 2026-01-13 00:58:18.401764 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-01-13 00:58:18.401770 | orchestrator | Tuesday 13 January 2026 00:57:20 +0000 (0:00:00.798) 0:01:46.307 ******* 2026-01-13 00:58:18.401776 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:58:18.401783 | orchestrator | ok: [testbed-node-1] 2026-01-13 00:58:18.401789 | orchestrator | ok: [testbed-node-2] 2026-01-13 00:58:18.401795 | orchestrator | 2026-01-13 00:58:18.401801 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-01-13 00:58:18.401808 | orchestrator | Tuesday 13 January 2026 00:57:21 +0000 (0:00:00.812) 0:01:47.120 ******* 2026-01-13 
00:58:18.401814 | orchestrator | changed: [testbed-node-0] 2026-01-13 00:58:18.401820 | orchestrator | 2026-01-13 00:58:18.401827 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-01-13 00:58:18.401834 | orchestrator | Tuesday 13 January 2026 00:57:22 +0000 (0:00:01.571) 0:01:48.691 ******* 2026-01-13 00:58:18.401845 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-01-13 00:58:18.401852 | orchestrator | 2026-01-13 00:58:18.401858 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-01-13 00:58:18.401865 | orchestrator | Tuesday 13 January 2026 00:57:34 +0000 (0:00:12.222) 0:02:00.913 ******* 2026-01-13 00:58:18.401876 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-01-13 00:58:18.401883 | orchestrator | 2026-01-13 00:58:18.401889 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-01-13 00:58:18.401895 | orchestrator | Tuesday 13 January 2026 00:58:04 +0000 (0:00:29.303) 0:02:30.217 ******* 2026-01-13 00:58:18.401902 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-01-13 00:58:18.401908 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-01-13 00:58:18.401914 | orchestrator | 2026-01-13 00:58:18.401921 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-01-13 00:58:18.401927 | orchestrator | Tuesday 13 January 2026 00:58:10 +0000 (0:00:06.295) 0:02:36.512 ******* 2026-01-13 00:58:18.401937 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.401944 | orchestrator | 2026-01-13 00:58:18.401951 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-01-13 00:58:18.401957 | orchestrator | Tuesday 13 January 2026 00:58:10 +0000 (0:00:00.127) 
0:02:36.640 ******* 2026-01-13 00:58:18.401963 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.401970 | orchestrator | 2026-01-13 00:58:18.401976 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-01-13 00:58:18.401982 | orchestrator | Tuesday 13 January 2026 00:58:10 +0000 (0:00:00.109) 0:02:36.749 ******* 2026-01-13 00:58:18.401988 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.401995 | orchestrator | 2026-01-13 00:58:18.402001 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-01-13 00:58:18.402007 | orchestrator | Tuesday 13 January 2026 00:58:10 +0000 (0:00:00.136) 0:02:36.885 ******* 2026-01-13 00:58:18.402049 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.402058 | orchestrator | 2026-01-13 00:58:18.402065 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-01-13 00:58:18.402071 | orchestrator | Tuesday 13 January 2026 00:58:11 +0000 (0:00:00.830) 0:02:37.716 ******* 2026-01-13 00:58:18.402078 | orchestrator | ok: [testbed-node-0] 2026-01-13 00:58:18.402084 | orchestrator | 2026-01-13 00:58:18.402091 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-13 00:58:18.402097 | orchestrator | Tuesday 13 January 2026 00:58:15 +0000 (0:00:03.662) 0:02:41.378 ******* 2026-01-13 00:58:18.402104 | orchestrator | skipping: [testbed-node-0] 2026-01-13 00:58:18.402110 | orchestrator | skipping: [testbed-node-1] 2026-01-13 00:58:18.402116 | orchestrator | skipping: [testbed-node-2] 2026-01-13 00:58:18.402123 | orchestrator | 2026-01-13 00:58:18.402129 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 00:58:18.402136 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-13 00:58:18.402144 | 
orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-13 00:58:18.402150 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-13 00:58:18.402157 | orchestrator | 2026-01-13 00:58:18.402164 | orchestrator | 2026-01-13 00:58:18.402170 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 00:58:18.402177 | orchestrator | Tuesday 13 January 2026 00:58:15 +0000 (0:00:00.409) 0:02:41.788 ******* 2026-01-13 00:58:18.402183 | orchestrator | =============================================================================== 2026-01-13 00:58:18.402190 | orchestrator | service-ks-register : keystone | Creating services --------------------- 29.30s 2026-01-13 00:58:18.402196 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 15.30s 2026-01-13 00:58:18.402203 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.01s 2026-01-13 00:58:18.402215 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.22s 2026-01-13 00:58:18.402222 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.24s 2026-01-13 00:58:18.402228 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.94s 2026-01-13 00:58:18.402235 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.63s 2026-01-13 00:58:18.402241 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.30s 2026-01-13 00:58:18.402248 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.52s 2026-01-13 00:58:18.402254 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.43s 2026-01-13 00:58:18.402260 | orchestrator | keystone : Creating 
default user role ----------------------------------- 3.66s 2026-01-13 00:58:18.402267 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.45s 2026-01-13 00:58:18.402273 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.37s 2026-01-13 00:58:18.402279 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.11s 2026-01-13 00:58:18.402285 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.56s 2026-01-13 00:58:18.402291 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.49s 2026-01-13 00:58:18.402298 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.38s 2026-01-13 00:58:18.402309 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.80s 2026-01-13 00:58:18.402316 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.71s 2026-01-13 00:58:18.402322 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.57s 2026-01-13 00:58:18.402328 | orchestrator | 2026-01-13 00:58:18 | INFO  | Task cc2828b6-ec68-4a65-b245-786e5c13977b is in state STARTED 2026-01-13 00:58:18.402334 | orchestrator | 2026-01-13 00:58:18 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state STARTED 2026-01-13 00:58:18.402341 | orchestrator | 2026-01-13 00:58:18 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 00:58:18.402347 | orchestrator | 2026-01-13 00:58:18 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 00:58:18.403092 | orchestrator | 2026-01-13 00:58:18 | INFO  | Task 65f5c828-773f-4884-becd-75772195a52b is in state STARTED 2026-01-13 00:58:18.403249 | orchestrator | 2026-01-13 00:58:18 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:58:21.432904 | 
orchestrator | 2026-01-13 00:58:21 | INFO  | Task cc2828b6-ec68-4a65-b245-786e5c13977b is in state STARTED 2026-01-13 00:58:21.433934 | orchestrator | 2026-01-13 00:58:21 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state STARTED 2026-01-13 00:58:21.438278 | orchestrator | 2026-01-13 00:58:21 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 00:58:21.438838 | orchestrator | 2026-01-13 00:58:21 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 00:58:21.440013 | orchestrator | 2026-01-13 00:58:21 | INFO  | Task 65f5c828-773f-4884-becd-75772195a52b is in state STARTED 2026-01-13 00:58:21.440045 | orchestrator | 2026-01-13 00:58:21 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:58:24.478886 | orchestrator | 2026-01-13 00:58:24 | INFO  | Task cc2828b6-ec68-4a65-b245-786e5c13977b is in state STARTED 2026-01-13 00:58:24.481465 | orchestrator | 2026-01-13 00:58:24 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state STARTED 2026-01-13 00:58:24.483556 | orchestrator | 2026-01-13 00:58:24 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 00:58:24.485637 | orchestrator | 2026-01-13 00:58:24 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 00:58:24.488061 | orchestrator | 2026-01-13 00:58:24 | INFO  | Task 65f5c828-773f-4884-becd-75772195a52b is in state STARTED 2026-01-13 00:58:24.488109 | orchestrator | 2026-01-13 00:58:24 | INFO  | Wait 1 second(s) until the next check 2026-01-13 00:58:27.545904 | orchestrator | 2026-01-13 00:58:27 | INFO  | Task cc2828b6-ec68-4a65-b245-786e5c13977b is in state STARTED 2026-01-13 00:58:27.548470 | orchestrator | 2026-01-13 00:58:27 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state STARTED 2026-01-13 00:58:27.551784 | orchestrator | 2026-01-13 00:58:27 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 00:58:27.554196 | 
orchestrator | 2026-01-13 00:58:27 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 00:58:27.555213 | orchestrator | 2026-01-13 00:58:27 | INFO  | Task 65f5c828-773f-4884-becd-75772195a52b is in state STARTED
2026-01-13 00:58:27.555574 | orchestrator | 2026-01-13 00:58:27 | INFO  | Wait 1 second(s) until the next check
[... identical status polls (tasks cc2828b6, b639ab41, a1dbb496, 9e2ad8df, 65f5c828 all in state STARTED) repeated every ~3 s from 00:58:30 to 00:58:52 ...]
2026-01-13 00:58:55.145555 | orchestrator | 2026-01-13 00:58:55 | INFO  | Task cc2828b6-ec68-4a65-b245-786e5c13977b is in state STARTED
2026-01-13 00:58:55.145661 | orchestrator | 2026-01-13 00:58:55 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state STARTED
2026-01-13 00:58:55.148049 | orchestrator | 2026-01-13 00:58:55 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 00:58:55.149016 | orchestrator | 2026-01-13 00:58:55 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 00:58:55.150908 | orchestrator | 2026-01-13 00:58:55 | INFO  | Task 65f5c828-773f-4884-becd-75772195a52b is in state SUCCESS
2026-01-13 00:58:55.150955 | orchestrator | 2026-01-13 00:58:55 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:58:58.215477 | orchestrator | 2026-01-13 00:58:58 | INFO  | Task f5ffee94-dbe4-45cd-8561-7cdaa3c130e4 is in state STARTED
2026-01-13 00:58:58.217654 | orchestrator | 2026-01-13 00:58:58 | INFO  | Task cc2828b6-ec68-4a65-b245-786e5c13977b is in state STARTED
2026-01-13 00:58:58.218768 | orchestrator | 2026-01-13 00:58:58 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state STARTED
2026-01-13 00:58:58.220656 | orchestrator | 2026-01-13 00:58:58 | INFO  | Task
a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 00:58:58.222633 | orchestrator | 2026-01-13 00:58:58 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 00:58:58.222702 | orchestrator | 2026-01-13 00:58:58 | INFO  | Wait 1 second(s) until the next check
[... identical status polls (tasks f5ffee94, cc2828b6, b639ab41, a1dbb496, 9e2ad8df all in state STARTED) repeated every ~3 s from 00:59:01 to 00:59:19 ...]
2026-01-13 00:59:22.513289 | orchestrator | 2026-01-13 00:59:22 | INFO  | Task
f5ffee94-dbe4-45cd-8561-7cdaa3c130e4 is in state STARTED
2026-01-13 00:59:22.515130 | orchestrator | 2026-01-13 00:59:22 | INFO  | Task cc2828b6-ec68-4a65-b245-786e5c13977b is in state STARTED
2026-01-13 00:59:22.515739 | orchestrator | 2026-01-13 00:59:22 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state STARTED
2026-01-13 00:59:22.516326 | orchestrator | 2026-01-13 00:59:22 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 00:59:22.517044 | orchestrator | 2026-01-13 00:59:22 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 00:59:22.517094 | orchestrator | 2026-01-13 00:59:22 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:59:25.538386 | orchestrator | 2026-01-13 00:59:25 | INFO  | Task f5ffee94-dbe4-45cd-8561-7cdaa3c130e4 is in state STARTED
2026-01-13 00:59:25.538546 | orchestrator | 2026-01-13 00:59:25 | INFO  | Task cc2828b6-ec68-4a65-b245-786e5c13977b is in state SUCCESS
2026-01-13 00:59:25.539017 | orchestrator |
2026-01-13 00:59:25.539041 | orchestrator |
2026-01-13 00:59:25.539046 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-13 00:59:25.539052 | orchestrator |
2026-01-13 00:59:25.539059 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-13 00:59:25.539068 | orchestrator | Tuesday 13 January 2026 00:58:03 +0000 (0:00:00.235) 0:00:00.235 *******
2026-01-13 00:59:25.539077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-13 00:59:25.539085 | orchestrator |
2026-01-13 00:59:25.539091 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-13 00:59:25.539098 | orchestrator | Tuesday 13 January 2026 00:58:03 +0000 (0:00:00.240) 0:00:00.475 *******
2026-01-13 00:59:25.539105 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-13 00:59:25.539112 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-13 00:59:25.539120 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-13 00:59:25.539126 | orchestrator |
2026-01-13 00:59:25.539133 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-13 00:59:25.539140 | orchestrator | Tuesday 13 January 2026 00:58:04 +0000 (0:00:01.330) 0:00:01.806 *******
2026-01-13 00:59:25.539146 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-13 00:59:25.539153 | orchestrator |
2026-01-13 00:59:25.539159 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-13 00:59:25.539166 | orchestrator | Tuesday 13 January 2026 00:58:06 +0000 (0:00:01.525) 0:00:03.332 *******
2026-01-13 00:59:25.539173 | orchestrator | changed: [testbed-manager]
2026-01-13 00:59:25.539180 | orchestrator |
2026-01-13 00:59:25.539187 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-13 00:59:25.539210 | orchestrator | Tuesday 13 January 2026 00:58:07 +0000 (0:00:00.952) 0:00:04.284 *******
2026-01-13 00:59:25.539217 | orchestrator | changed: [testbed-manager]
2026-01-13 00:59:25.539224 | orchestrator |
2026-01-13 00:59:25.539231 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-13 00:59:25.539269 | orchestrator | Tuesday 13 January 2026 00:58:08 +0000 (0:00:00.943) 0:00:05.228 *******
2026-01-13 00:59:25.539276 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-01-13 00:59:25.539282 | orchestrator | ok: [testbed-manager]
2026-01-13 00:59:25.539288 | orchestrator |
2026-01-13 00:59:25.539294 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-01-13 00:59:25.539301 | orchestrator | Tuesday 13 January 2026 00:58:43 +0000 (0:00:35.321) 0:00:40.549 *******
2026-01-13 00:59:25.539306 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-01-13 00:59:25.539313 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-01-13 00:59:25.539319 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-01-13 00:59:25.539325 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-01-13 00:59:25.539331 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-01-13 00:59:25.539355 | orchestrator |
2026-01-13 00:59:25.539361 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-01-13 00:59:25.539367 | orchestrator | Tuesday 13 January 2026 00:58:47 +0000 (0:00:04.039) 0:00:44.589 *******
2026-01-13 00:59:25.539373 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-01-13 00:59:25.539380 | orchestrator |
2026-01-13 00:59:25.539385 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-01-13 00:59:25.539392 | orchestrator | Tuesday 13 January 2026 00:58:48 +0000 (0:00:00.541) 0:00:45.130 *******
2026-01-13 00:59:25.539398 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:59:25.539404 | orchestrator |
2026-01-13 00:59:25.539410 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-01-13 00:59:25.539416 | orchestrator | Tuesday 13 January 2026 00:58:48 +0000 (0:00:00.132) 0:00:45.263 *******
2026-01-13 00:59:25.539422 | orchestrator | skipping: [testbed-manager]
2026-01-13 00:59:25.539429 | orchestrator |
2026-01-13 00:59:25.539436 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-01-13 00:59:25.539442 | orchestrator | Tuesday 13 January 2026 00:58:48 +0000 (0:00:00.577) 0:00:45.841 *******
2026-01-13 00:59:25.539449 | orchestrator | changed: [testbed-manager]
2026-01-13 00:59:25.539456 | orchestrator |
2026-01-13 00:59:25.539462 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-01-13 00:59:25.539468 | orchestrator | Tuesday 13 January 2026 00:58:51 +0000 (0:00:02.034) 0:00:47.876 *******
2026-01-13 00:59:25.539474 | orchestrator | changed: [testbed-manager]
2026-01-13 00:59:25.539480 | orchestrator |
2026-01-13 00:59:25.539485 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-01-13 00:59:25.539492 | orchestrator | Tuesday 13 January 2026 00:58:51 +0000 (0:00:00.754) 0:00:48.630 *******
2026-01-13 00:59:25.539498 | orchestrator | changed: [testbed-manager]
2026-01-13 00:59:25.539504 | orchestrator |
2026-01-13 00:59:25.539510 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-01-13 00:59:25.539516 | orchestrator | Tuesday 13 January 2026 00:58:52 +0000 (0:00:00.727) 0:00:49.357 *******
2026-01-13 00:59:25.539522 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-01-13 00:59:25.539529 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-01-13 00:59:25.539535 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-01-13 00:59:25.539542 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-01-13 00:59:25.539548 | orchestrator |
2026-01-13 00:59:25.539554 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 00:59:25.539561 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-13 00:59:25.539569 | orchestrator |
2026-01-13 00:59:25.539576 | orchestrator |
2026-01-13 00:59:25.539592 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 00:59:25.539600 | orchestrator | Tuesday 13 January 2026 00:58:54 +0000 (0:00:01.628) 0:00:50.986 *******
2026-01-13 00:59:25.539605 | orchestrator | ===============================================================================
2026-01-13 00:59:25.539611 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.32s
2026-01-13 00:59:25.539617 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.04s
2026-01-13 00:59:25.539624 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.03s
2026-01-13 00:59:25.539713 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.63s
2026-01-13 00:59:25.539723 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.53s
2026-01-13 00:59:25.539730 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.33s
2026-01-13 00:59:25.539737 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s
2026-01-13 00:59:25.539744 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.94s
2026-01-13 00:59:25.539760 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.75s
2026-01-13 00:59:25.539767 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.73s
2026-01-13 00:59:25.539774 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.58s
2026-01-13 00:59:25.539780 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.54s
2026-01-13 00:59:25.539787 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s
2026-01-13 00:59:25.539793 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2026-01-13 00:59:25.539800 | orchestrator |
2026-01-13 00:59:25.539807 | orchestrator |
2026-01-13 00:59:25.539814 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-01-13 00:59:25.539821 | orchestrator |
2026-01-13 00:59:25.539834 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-01-13 00:59:25.539841 | orchestrator | Tuesday 13 January 2026 00:58:21 +0000 (0:00:00.212) 0:00:00.212 *******
2026-01-13 00:59:25.539848 | orchestrator | changed: [localhost]
2026-01-13 00:59:25.539855 | orchestrator |
2026-01-13 00:59:25.539862 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-01-13 00:59:25.539869 | orchestrator | Tuesday 13 January 2026 00:58:22 +0000 (0:00:01.242) 0:00:01.455 *******
2026-01-13 00:59:25.539875 | orchestrator | changed: [localhost]
2026-01-13 00:59:25.539883 | orchestrator |
2026-01-13 00:59:25.539889 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-01-13 00:59:25.539896 | orchestrator | Tuesday 13 January 2026 00:58:55 +0000 (0:00:32.735) 0:00:34.191 *******
2026-01-13 00:59:25.539903 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-01-13 00:59:25.539909 | orchestrator | changed: [localhost]
2026-01-13 00:59:25.539916 | orchestrator |
2026-01-13 00:59:25.539922 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-13 00:59:25.539929 | orchestrator |
2026-01-13 00:59:25.539936 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-13 00:59:25.539942 | orchestrator | Tuesday 13 January 2026 00:59:22 +0000 (0:00:27.198) 0:01:01.389 *******
2026-01-13 00:59:25.539949 | orchestrator | ok: [testbed-node-0]
2026-01-13 00:59:25.539955 | orchestrator | ok: [testbed-node-1]
2026-01-13 00:59:25.539961 | orchestrator | ok: [testbed-node-2]
2026-01-13 00:59:25.539968 | orchestrator |
2026-01-13 00:59:25.539975 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-13 00:59:25.539982 | orchestrator | Tuesday 13 January 2026 00:59:23 +0000 (0:00:00.699) 0:01:02.088 *******
2026-01-13 00:59:25.539989 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-01-13 00:59:25.539996 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-01-13 00:59:25.540002 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-01-13 00:59:25.540009 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-01-13 00:59:25.540016 | orchestrator |
2026-01-13 00:59:25.540023 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-01-13 00:59:25.540030 | orchestrator | skipping: no hosts matched
2026-01-13 00:59:25.540036 | orchestrator |
2026-01-13 00:59:25.540042 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 00:59:25.540049 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:59:25.540056 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:59:25.540064 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:59:25.540071 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-13 00:59:25.540083 | orchestrator |
2026-01-13 00:59:25.540090 | orchestrator |
2026-01-13 00:59:25.540096 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 00:59:25.540103 | orchestrator | Tuesday 13 January 2026 00:59:24 +0000 (0:00:00.985) 0:01:03.073 *******
2026-01-13 00:59:25.540110 | orchestrator | ===============================================================================
2026-01-13 00:59:25.540116 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 32.74s
2026-01-13 00:59:25.540122 | orchestrator | Download ironic-agent kernel ------------------------------------------- 27.20s
2026-01-13 00:59:25.540129 | orchestrator | Ensure the destination directory exists --------------------------------- 1.24s
2026-01-13 00:59:25.540142 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.99s
2026-01-13 00:59:25.540149 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.70s
2026-01-13 00:59:25.540156 | orchestrator | 2026-01-13 00:59:25 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state STARTED
2026-01-13 00:59:25.541084 | orchestrator | 2026-01-13 00:59:25 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 00:59:25.542623 | orchestrator | 2026-01-13 00:59:25 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 00:59:25.543491 | orchestrator | 2026-01-13 00:59:25 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED
2026-01-13 00:59:25.543548 |
orchestrator | 2026-01-13 00:59:25 | INFO  | Wait 1 second(s) until the next check
2026-01-13 00:59:28.567987 | orchestrator | 2026-01-13 00:59:28 | INFO  | Task f5ffee94-dbe4-45cd-8561-7cdaa3c130e4 is in state STARTED
2026-01-13 00:59:28.568226 | orchestrator | 2026-01-13 00:59:28 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state STARTED
2026-01-13 00:59:28.569298 | orchestrator | 2026-01-13 00:59:28 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 00:59:28.569898 | orchestrator | 2026-01-13 00:59:28 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 00:59:28.571066 | orchestrator | 2026-01-13 00:59:28 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED
2026-01-13 00:59:28.571127 | orchestrator | 2026-01-13 00:59:28 | INFO  | Wait 1 second(s) until the next check
[... identical status polls (tasks f5ffee94, b639ab41, a1dbb496, 9e2ad8df, 509960b6 all in state STARTED) repeated every ~3 s from 00:59:31 to 01:00:08 ...]
2026-01-13 01:00:11.104524 | orchestrator | 2026-01-13 01:00:11 | INFO  | Task f5ffee94-dbe4-45cd-8561-7cdaa3c130e4 is in state STARTED
2026-01-13 01:00:11.104652 | orchestrator | 2026-01-13 01:00:11 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state STARTED
2026-01-13 01:00:11.105336 | orchestrator | 2026-01-13 01:00:11 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:00:11.105868 | orchestrator | 2026-01-13 01:00:11 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 01:00:11.106403 | orchestrator | 2026-01-13 01:00:11 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED
2026-01-13 01:00:11.106436 | orchestrator | 2026-01-13 01:00:11 | INFO  | Wait 1
second(s) until the next check 2026-01-13 01:00:14.128127 | orchestrator | 2026-01-13 01:00:14 | INFO  | Task f5ffee94-dbe4-45cd-8561-7cdaa3c130e4 is in state STARTED 2026-01-13 01:00:14.128422 | orchestrator | 2026-01-13 01:00:14 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state STARTED 2026-01-13 01:00:14.128958 | orchestrator | 2026-01-13 01:00:14 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:00:14.131711 | orchestrator | 2026-01-13 01:00:14 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 01:00:14.132207 | orchestrator | 2026-01-13 01:00:14 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED 2026-01-13 01:00:14.132234 | orchestrator | 2026-01-13 01:00:14 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:00:17.163264 | orchestrator | 2026-01-13 01:00:17 | INFO  | Task f5ffee94-dbe4-45cd-8561-7cdaa3c130e4 is in state STARTED 2026-01-13 01:00:17.163578 | orchestrator | 2026-01-13 01:00:17 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state STARTED 2026-01-13 01:00:17.164485 | orchestrator | 2026-01-13 01:00:17 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:00:17.165042 | orchestrator | 2026-01-13 01:00:17 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 01:00:17.165938 | orchestrator | 2026-01-13 01:00:17 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED 2026-01-13 01:00:17.165976 | orchestrator | 2026-01-13 01:00:17 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:00:20.199618 | orchestrator | 2026-01-13 01:00:20 | INFO  | Task f5ffee94-dbe4-45cd-8561-7cdaa3c130e4 is in state STARTED 2026-01-13 01:00:20.200445 | orchestrator | 2026-01-13 01:00:20 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state STARTED 2026-01-13 01:00:20.201244 | orchestrator | 2026-01-13 01:00:20 | INFO  | Task 
a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:00:20.202157 | orchestrator | 2026-01-13 01:00:20 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 01:00:20.202912 | orchestrator | 2026-01-13 01:00:20 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED
2026-01-13 01:00:20.203336 | orchestrator | 2026-01-13 01:00:20 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:00:23.233002 | orchestrator | 2026-01-13 01:00:23 | INFO  | Task f5ffee94-dbe4-45cd-8561-7cdaa3c130e4 is in state STARTED
2026-01-13 01:00:23.234638 | orchestrator | 2026-01-13 01:00:23 | INFO  | Task b639ab41-f416-4eeb-9ecc-fa20fc59daf4 is in state SUCCESS
2026-01-13 01:00:23.235478 | orchestrator |
2026-01-13 01:00:23.235495 | orchestrator |
2026-01-13 01:00:23.235500 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-13 01:00:23.235504 | orchestrator |
2026-01-13 01:00:23.235508 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-13 01:00:23.235512 | orchestrator | Tuesday 13 January 2026 00:58:22 +0000 (0:00:00.346) 0:00:00.346 *******
2026-01-13 01:00:23.235516 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:00:23.235521 | orchestrator | ok: [testbed-node-1]
2026-01-13 01:00:23.235524 | orchestrator | ok: [testbed-node-2]
2026-01-13 01:00:23.235528 | orchestrator |
2026-01-13 01:00:23.235532 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-13 01:00:23.235536 | orchestrator | Tuesday 13 January 2026 00:58:23 +0000 (0:00:00.445) 0:00:00.791 *******
2026-01-13 01:00:23.235540 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-01-13 01:00:23.235544 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-01-13 01:00:23.235548 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-01-13 01:00:23.235552 | orchestrator |
2026-01-13 01:00:23.235555 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-01-13 01:00:23.235559 | orchestrator |
2026-01-13 01:00:23.235563 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-13 01:00:23.235567 | orchestrator | Tuesday 13 January 2026 00:58:23 +0000 (0:00:00.536) 0:00:01.328 *******
2026-01-13 01:00:23.235571 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 01:00:23.235575 | orchestrator |
2026-01-13 01:00:23.235579 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-01-13 01:00:23.235582 | orchestrator | Tuesday 13 January 2026 00:58:24 +0000 (0:00:00.487) 0:00:01.816 *******
2026-01-13 01:00:23.235586 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-01-13 01:00:23.235590 | orchestrator |
2026-01-13 01:00:23.235594 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-01-13 01:00:23.235598 | orchestrator | Tuesday 13 January 2026 00:58:28 +0000 (0:00:03.974) 0:00:05.791 *******
2026-01-13 01:00:23.235601 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-01-13 01:00:23.235605 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-01-13 01:00:23.235609 | orchestrator |
2026-01-13 01:00:23.235613 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-01-13 01:00:23.235616 | orchestrator | Tuesday 13 January 2026 00:58:35 +0000 (0:00:07.737) 0:00:13.528 *******
2026-01-13 01:00:23.235620 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-01-13 01:00:23.235636 | orchestrator |
2026-01-13 01:00:23.235640 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-01-13 01:00:23.235643 | orchestrator | Tuesday 13 January 2026 00:58:39 +0000 (0:00:03.390) 0:00:16.918 *******
2026-01-13 01:00:23.235647 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-13 01:00:23.235651 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-01-13 01:00:23.235655 | orchestrator |
2026-01-13 01:00:23.235658 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-01-13 01:00:23.235662 | orchestrator | Tuesday 13 January 2026 00:58:43 +0000 (0:00:04.393) 0:00:21.312 *******
2026-01-13 01:00:23.235666 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-13 01:00:23.235669 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-01-13 01:00:23.235673 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-01-13 01:00:23.235677 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-01-13 01:00:23.235680 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-01-13 01:00:23.235684 | orchestrator |
2026-01-13 01:00:23.235688 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-01-13 01:00:23.235692 | orchestrator | Tuesday 13 January 2026 00:59:01 +0000 (0:00:17.685) 0:00:38.997 *******
2026-01-13 01:00:23.235800 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-01-13 01:00:23.235812 | orchestrator |
2026-01-13 01:00:23.235820 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-01-13 01:00:23.235827 | orchestrator | Tuesday 13 January 2026 00:59:06 +0000 (0:00:04.792) 0:00:43.789 *******
2026-01-13 01:00:23.235837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.235859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.235865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.235874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.235891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.235895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.235903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.235910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.235914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.235921 | orchestrator | 2026-01-13 01:00:23.235925 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-01-13 01:00:23.235929 | orchestrator | Tuesday 13 January 2026 00:59:08 +0000 (0:00:02.354) 0:00:46.144 ******* 2026-01-13 01:00:23.235932 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-01-13 01:00:23.235936 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-01-13 01:00:23.235940 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-01-13 01:00:23.235948 | orchestrator | 2026-01-13 01:00:23.235952 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-01-13 01:00:23.235959 | orchestrator | Tuesday 13 January 2026 00:59:10 +0000 (0:00:01.742) 0:00:47.886 ******* 2026-01-13 01:00:23.235965 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:00:23.235971 | orchestrator | 2026-01-13 01:00:23.235976 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-01-13 01:00:23.235982 | orchestrator | Tuesday 13 January 2026 00:59:10 +0000 (0:00:00.102) 0:00:47.988 ******* 2026-01-13 01:00:23.235988 | orchestrator | skipping: [testbed-node-0] 2026-01-13 
01:00:23.235994 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:00:23.236000 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:00:23.236006 | orchestrator | 2026-01-13 01:00:23.236012 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-13 01:00:23.236017 | orchestrator | Tuesday 13 January 2026 00:59:10 +0000 (0:00:00.376) 0:00:48.365 ******* 2026-01-13 01:00:23.236024 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:00:23.236030 | orchestrator | 2026-01-13 01:00:23.236036 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-01-13 01:00:23.236043 | orchestrator | Tuesday 13 January 2026 00:59:11 +0000 (0:00:00.452) 0:00:48.817 ******* 2026-01-13 01:00:23.236050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.236065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.236074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.236078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236110 | orchestrator | 2026-01-13 01:00:23.236115 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-01-13 01:00:23.236137 | orchestrator | Tuesday 13 January 2026 00:59:14 +0000 (0:00:03.018) 0:00:51.836 ******* 2026-01-13 01:00:23.236142 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-13 01:00:23.236147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236157 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:00:23.236164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-13 01:00:23.236174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236178 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-13 01:00:23.236188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236192 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:00:23.236197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236201 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:00:23.236206 | orchestrator | 2026-01-13 01:00:23.236210 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-01-13 01:00:23.236217 | orchestrator | Tuesday 13 January 2026 00:59:15 +0000 (0:00:01.274) 0:00:53.111 ******* 2026-01-13 01:00:23.236226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-13 01:00:23.236231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236239 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:00:23.236243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-13 01:00:23.236247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236260 | orchestrator | skipping: 
[testbed-node-1] 2026-01-13 01:00:23.236269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-13 01:00:23.236273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236281 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:00:23.236285 | orchestrator | 2026-01-13 01:00:23.236288 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-01-13 01:00:23.236292 | orchestrator | Tuesday 13 January 2026 00:59:16 +0000 (0:00:01.434) 0:00:54.545 ******* 2026-01-13 01:00:23.236296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.236454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.236465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.236469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236502 | orchestrator | 2026-01-13 01:00:23.236506 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-01-13 01:00:23.236509 | orchestrator | Tuesday 13 January 2026 00:59:21 +0000 (0:00:04.191) 0:00:58.737 ******* 2026-01-13 01:00:23.236513 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:00:23.236517 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:00:23.236521 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:00:23.236525 | orchestrator | 2026-01-13 
01:00:23.236528 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-01-13 01:00:23.236532 | orchestrator | Tuesday 13 January 2026 00:59:23 +0000 (0:00:02.499) 0:01:01.236 ******* 2026-01-13 01:00:23.236536 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-13 01:00:23.236540 | orchestrator | 2026-01-13 01:00:23.236544 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-01-13 01:00:23.236548 | orchestrator | Tuesday 13 January 2026 00:59:24 +0000 (0:00:01.097) 0:01:02.333 ******* 2026-01-13 01:00:23.236551 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:00:23.236555 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:00:23.236559 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:00:23.236563 | orchestrator | 2026-01-13 01:00:23.236566 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-01-13 01:00:23.236570 | orchestrator | Tuesday 13 January 2026 00:59:25 +0000 (0:00:00.662) 0:01:02.996 ******* 2026-01-13 01:00:23.236574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.236591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.236600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 
01:00:23.236604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236630 | orchestrator | 2026-01-13 01:00:23.236634 | orchestrator | TASK 
[barbican : Copying over existing policy file] **************************** 2026-01-13 01:00:23.236640 | orchestrator | Tuesday 13 January 2026 00:59:35 +0000 (0:00:10.273) 0:01:13.269 ******* 2026-01-13 01:00:23.236646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-13 01:00:23.236650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 
'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236660 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:00:23.236664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-13 01:00:23.236668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236680 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:00:23.236684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-13 01:00:23.236688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:00:23.236698 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:00:23.236702 | orchestrator | 2026-01-13 01:00:23.236706 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-01-13 01:00:23.236710 | orchestrator | Tuesday 13 January 2026 00:59:36 +0000 (0:00:01.211) 0:01:14.481 ******* 2026-01-13 01:00:23.236714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.236722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.236726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-13 01:00:23.236730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:00:23.236763 | orchestrator | 2026-01-13 01:00:23.236767 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-13 01:00:23.236771 | orchestrator | Tuesday 13 January 2026 00:59:40 +0000 (0:00:03.821) 0:01:18.302 ******* 2026-01-13 01:00:23.236775 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:00:23.236778 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:00:23.236784 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:00:23.236788 | orchestrator | 2026-01-13 01:00:23.236792 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-01-13 01:00:23.236796 | orchestrator | Tuesday 13 January 2026 00:59:41 +0000 (0:00:00.474) 0:01:18.777 ******* 2026-01-13 01:00:23.236799 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:00:23.236803 | orchestrator | 2026-01-13 01:00:23.236807 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-01-13 01:00:23.236811 | orchestrator | Tuesday 13 January 2026 00:59:43 +0000 (0:00:02.494) 0:01:21.271 ******* 2026-01-13 01:00:23.236814 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:00:23.236818 | orchestrator | 2026-01-13 01:00:23.236822 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-01-13 01:00:23.236825 | orchestrator | Tuesday 13 January 2026 00:59:46 +0000 (0:00:02.649) 0:01:23.921 ******* 2026-01-13 01:00:23.236829 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:00:23.236833 | orchestrator | 2026-01-13 01:00:23.236836 | orchestrator 
| TASK [barbican : Flush handlers] *********************************************** 2026-01-13 01:00:23.236840 | orchestrator | Tuesday 13 January 2026 00:59:59 +0000 (0:00:13.110) 0:01:37.031 ******* 2026-01-13 01:00:23.236844 | orchestrator | 2026-01-13 01:00:23.236848 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-13 01:00:23.236851 | orchestrator | Tuesday 13 January 2026 00:59:59 +0000 (0:00:00.121) 0:01:37.152 ******* 2026-01-13 01:00:23.236855 | orchestrator | 2026-01-13 01:00:23.236859 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-13 01:00:23.236863 | orchestrator | Tuesday 13 January 2026 00:59:59 +0000 (0:00:00.118) 0:01:37.271 ******* 2026-01-13 01:00:23.236867 | orchestrator | 2026-01-13 01:00:23.236870 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-01-13 01:00:23.236874 | orchestrator | Tuesday 13 January 2026 00:59:59 +0000 (0:00:00.066) 0:01:37.337 ******* 2026-01-13 01:00:23.236878 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:00:23.236883 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:00:23.236890 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:00:23.236896 | orchestrator | 2026-01-13 01:00:23.236902 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-01-13 01:00:23.236909 | orchestrator | Tuesday 13 January 2026 01:00:07 +0000 (0:00:07.277) 0:01:44.614 ******* 2026-01-13 01:00:23.236916 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:00:23.236922 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:00:23.236929 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:00:23.236935 | orchestrator | 2026-01-13 01:00:23.236942 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-01-13 01:00:23.236949 | orchestrator | Tuesday 13 January 
2026 01:00:12 +0000 (0:00:05.055) 0:01:49.671 ******* 2026-01-13 01:00:23.236956 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:00:23.236963 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:00:23.236967 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:00:23.236970 | orchestrator | 2026-01-13 01:00:23.236974 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:00:23.236978 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-13 01:00:23.236982 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-13 01:00:23.236986 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-13 01:00:23.236990 | orchestrator | 2026-01-13 01:00:23.236993 | orchestrator | 2026-01-13 01:00:23.236997 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:00:23.237001 | orchestrator | Tuesday 13 January 2026 01:00:20 +0000 (0:00:08.673) 0:01:58.345 ******* 2026-01-13 01:00:23.237007 | orchestrator | =============================================================================== 2026-01-13 01:00:23.237011 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.69s 2026-01-13 01:00:23.237017 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.11s 2026-01-13 01:00:23.237021 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.27s 2026-01-13 01:00:23.237025 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.67s 2026-01-13 01:00:23.237031 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.74s 2026-01-13 01:00:23.237035 | orchestrator | barbican : Restart barbican-api container 
------------------------------- 7.28s 2026-01-13 01:00:23.237038 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.06s 2026-01-13 01:00:23.237042 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.79s 2026-01-13 01:00:23.237046 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.39s 2026-01-13 01:00:23.237062 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.19s 2026-01-13 01:00:23.237066 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.97s 2026-01-13 01:00:23.237069 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.82s 2026-01-13 01:00:23.237073 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.39s 2026-01-13 01:00:23.237077 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.02s 2026-01-13 01:00:23.237080 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.65s 2026-01-13 01:00:23.237084 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.50s 2026-01-13 01:00:23.237089 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.49s 2026-01-13 01:00:23.237093 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.35s 2026-01-13 01:00:23.237097 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.74s 2026-01-13 01:00:23.237102 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.43s 2026-01-13 01:00:23.237106 | orchestrator | 2026-01-13 01:00:23 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:00:23.237110 | orchestrator | 2026-01-13 01:00:23 | INFO  | Task 
9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 01:00:23.237171 | orchestrator | 2026-01-13 01:00:23 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED 2026-01-13 01:00:23.237867 | orchestrator | 2026-01-13 01:00:23 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:00:23.238050 | orchestrator | 2026-01-13 01:00:23 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:00:26.273171 | orchestrator | 2026-01-13 01:00:26 | INFO  | Task f5ffee94-dbe4-45cd-8561-7cdaa3c130e4 is in state STARTED 2026-01-13 01:00:26.273228 | orchestrator | 2026-01-13 01:00:26 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:00:26.273705 | orchestrator | 2026-01-13 01:00:26 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 01:00:26.274366 | orchestrator | 2026-01-13 01:00:26 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED 2026-01-13 01:00:26.274844 | orchestrator | 2026-01-13 01:00:26 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:00:26.274870 | orchestrator | 2026-01-13 01:00:26 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:00:29.296671 | orchestrator | 2026-01-13 01:00:29 | INFO  | Task f5ffee94-dbe4-45cd-8561-7cdaa3c130e4 is in state STARTED 2026-01-13 01:00:29.297336 | orchestrator | 2026-01-13 01:00:29 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:00:29.297984 | orchestrator | 2026-01-13 01:00:29 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 01:00:29.298735 | orchestrator | 2026-01-13 01:00:29 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED 2026-01-13 01:00:29.299667 | orchestrator | 2026-01-13 01:00:29 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:00:29.299692 | orchestrator | 2026-01-13 01:00:29 | INFO  | Wait 1 
second(s) until the next check 2026-01-13 01:00:32.328895 | orchestrator | 2026-01-13 01:00:32 | INFO  | Task f5ffee94-dbe4-45cd-8561-7cdaa3c130e4 is in state STARTED 2026-01-13 01:00:32.330557 | orchestrator | 2026-01-13 01:00:32 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:00:32.332673 | orchestrator | 2026-01-13 01:00:32 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 01:00:32.334503 | orchestrator | 2026-01-13 01:00:32 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED 2026-01-13 01:00:32.337861 | orchestrator | 2026-01-13 01:00:32 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:00:32.337927 | orchestrator | 2026-01-13 01:00:32 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:00:35.373893 | orchestrator | 2026-01-13 01:00:35 | INFO  | Task f5ffee94-dbe4-45cd-8561-7cdaa3c130e4 is in state SUCCESS 2026-01-13 01:00:35.374066 | orchestrator | 2026-01-13 01:00:35 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:00:35.374088 | orchestrator | 2026-01-13 01:00:35 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 01:00:35.374112 | orchestrator | 2026-01-13 01:00:35 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED 2026-01-13 01:00:35.374119 | orchestrator | 2026-01-13 01:00:35 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:00:35.374126 | orchestrator | 2026-01-13 01:00:35 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:00:38.396590 | orchestrator | 2026-01-13 01:00:38 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:00:38.396927 | orchestrator | 2026-01-13 01:00:38 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 01:00:38.397444 | orchestrator | 2026-01-13 01:00:38 | INFO  | Task 
509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED 2026-01-13 01:00:38.398203 | orchestrator | 2026-01-13 01:00:38 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:00:38.398241 | orchestrator | 2026-01-13 01:00:38 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:00:41.449399 | orchestrator | 2026-01-13 01:00:41 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:00:41.450838 | orchestrator | 2026-01-13 01:00:41 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 01:00:41.453667 | orchestrator | 2026-01-13 01:00:41 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED 2026-01-13 01:00:41.456158 | orchestrator | 2026-01-13 01:00:41 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:00:41.456318 | orchestrator | 2026-01-13 01:00:41 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:00:44.500680 | orchestrator | 2026-01-13 01:00:44 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:00:44.501778 | orchestrator | 2026-01-13 01:00:44 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 01:00:44.507939 | orchestrator | 2026-01-13 01:00:44 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state STARTED 2026-01-13 01:00:44.510212 | orchestrator | 2026-01-13 01:00:44 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:00:44.510261 | orchestrator | 2026-01-13 01:00:44 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:00:47.580638 | orchestrator | 2026-01-13 01:00:47 | INFO  | Task dede57c8-30ae-496c-bc4c-7395ed5d10e8 is in state STARTED 2026-01-13 01:00:47.582806 | orchestrator | 2026-01-13 01:00:47 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:00:47.584885 | orchestrator | 2026-01-13 01:00:47 | INFO  | Task 
9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 01:00:47.586695 | orchestrator | 2026-01-13 01:00:47 | INFO  | Task 509960b6-ea70-4d2a-833f-c277c8679a2a is in state SUCCESS 2026-01-13 01:00:47.587554 | orchestrator | 2026-01-13 01:00:47.587590 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-13 01:00:47.587599 | orchestrator | 2.16.14 2026-01-13 01:00:47.587606 | orchestrator | 2026-01-13 01:00:47.587612 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-01-13 01:00:47.587619 | orchestrator | 2026-01-13 01:00:47.587625 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-01-13 01:00:47.587631 | orchestrator | Tuesday 13 January 2026 00:58:58 +0000 (0:00:00.287) 0:00:00.287 ******* 2026-01-13 01:00:47.587637 | orchestrator | changed: [testbed-manager] 2026-01-13 01:00:47.587644 | orchestrator | 2026-01-13 01:00:47.587650 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-01-13 01:00:47.587657 | orchestrator | Tuesday 13 January 2026 00:59:00 +0000 (0:00:01.892) 0:00:02.179 ******* 2026-01-13 01:00:47.587663 | orchestrator | changed: [testbed-manager] 2026-01-13 01:00:47.587667 | orchestrator | 2026-01-13 01:00:47.587670 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-01-13 01:00:47.587676 | orchestrator | Tuesday 13 January 2026 00:59:02 +0000 (0:00:01.350) 0:00:03.530 ******* 2026-01-13 01:00:47.587683 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:00:47.587689 | orchestrator | 2026-01-13 01:00:47.587695 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-01-13 01:00:47.587702 | orchestrator | Tuesday 13 January 2026 00:59:03 +0000 (0:00:01.031) 0:00:04.562 ******* 2026-01-13 01:00:47.587709 | orchestrator | changed: 
[testbed-manager] 2026-01-13 01:00:47.587715 | orchestrator | 2026-01-13 01:00:47.587722 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-01-13 01:00:47.587729 | orchestrator | Tuesday 13 January 2026 00:59:04 +0000 (0:00:01.618) 0:00:06.180 ******* 2026-01-13 01:00:47.587733 | orchestrator | changed: [testbed-manager] 2026-01-13 01:00:47.587737 | orchestrator | 2026-01-13 01:00:47.587741 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-01-13 01:00:47.587754 | orchestrator | Tuesday 13 January 2026 00:59:06 +0000 (0:00:01.226) 0:00:07.407 ******* 2026-01-13 01:00:47.587758 | orchestrator | changed: [testbed-manager] 2026-01-13 01:00:47.587761 | orchestrator | 2026-01-13 01:00:47.587765 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-01-13 01:00:47.587769 | orchestrator | Tuesday 13 January 2026 00:59:07 +0000 (0:00:01.012) 0:00:08.420 ******* 2026-01-13 01:00:47.587773 | orchestrator | changed: [testbed-manager] 2026-01-13 01:00:47.587776 | orchestrator | 2026-01-13 01:00:47.587780 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-01-13 01:00:47.587784 | orchestrator | Tuesday 13 January 2026 00:59:08 +0000 (0:00:01.121) 0:00:09.541 ******* 2026-01-13 01:00:47.587788 | orchestrator | changed: [testbed-manager] 2026-01-13 01:00:47.587792 | orchestrator | 2026-01-13 01:00:47.587796 | orchestrator | TASK [Create admin user] ******************************************************* 2026-01-13 01:00:47.587799 | orchestrator | Tuesday 13 January 2026 00:59:09 +0000 (0:00:01.194) 0:00:10.736 ******* 2026-01-13 01:00:47.587815 | orchestrator | changed: [testbed-manager] 2026-01-13 01:00:47.587821 | orchestrator | 2026-01-13 01:00:47.587878 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-01-13 01:00:47.587886 
| orchestrator | Tuesday 13 January 2026 01:00:10 +0000 (0:01:00.865) 0:01:11.602 ******* 2026-01-13 01:00:47.587889 | orchestrator | skipping: [testbed-manager] 2026-01-13 01:00:47.587893 | orchestrator | 2026-01-13 01:00:47.587897 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-13 01:00:47.587901 | orchestrator | 2026-01-13 01:00:47.587904 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-13 01:00:47.587919 | orchestrator | Tuesday 13 January 2026 01:00:10 +0000 (0:00:00.126) 0:01:11.728 ******* 2026-01-13 01:00:47.587924 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:00:47.587928 | orchestrator | 2026-01-13 01:00:47.587931 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-13 01:00:47.587935 | orchestrator | 2026-01-13 01:00:47.587939 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-13 01:00:47.587943 | orchestrator | Tuesday 13 January 2026 01:00:21 +0000 (0:00:11.353) 0:01:23.082 ******* 2026-01-13 01:00:47.587947 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:00:47.587950 | orchestrator | 2026-01-13 01:00:47.587954 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-13 01:00:47.587958 | orchestrator | 2026-01-13 01:00:47.587961 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-13 01:00:47.587965 | orchestrator | Tuesday 13 January 2026 01:00:32 +0000 (0:00:11.150) 0:01:34.232 ******* 2026-01-13 01:00:47.587969 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:00:47.587973 | orchestrator | 2026-01-13 01:00:47.587976 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:00:47.587981 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 
failed=0 skipped=1  rescued=0 ignored=0 2026-01-13 01:00:47.587985 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:00:47.587989 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:00:47.587993 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:00:47.587997 | orchestrator | 2026-01-13 01:00:47.588001 | orchestrator | 2026-01-13 01:00:47.588005 | orchestrator | 2026-01-13 01:00:47.588009 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:00:47.588013 | orchestrator | Tuesday 13 January 2026 01:00:34 +0000 (0:00:01.246) 0:01:35.479 ******* 2026-01-13 01:00:47.588016 | orchestrator | =============================================================================== 2026-01-13 01:00:47.588020 | orchestrator | Create admin user ------------------------------------------------------ 60.87s 2026-01-13 01:00:47.588032 | orchestrator | Restart ceph manager service ------------------------------------------- 23.75s 2026-01-13 01:00:47.588036 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.89s 2026-01-13 01:00:47.588040 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.62s 2026-01-13 01:00:47.588044 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.35s 2026-01-13 01:00:47.588048 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.23s 2026-01-13 01:00:47.588055 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s 2026-01-13 01:00:47.588064 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.12s 2026-01-13 01:00:47.588087 | orchestrator | Set mgr/dashboard/server_port to 
7000 ----------------------------------- 1.03s 2026-01-13 01:00:47.588104 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.01s 2026-01-13 01:00:47.588110 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2026-01-13 01:00:47.588116 | orchestrator | 2026-01-13 01:00:47.588696 | orchestrator | 2026-01-13 01:00:47.588719 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 01:00:47.588724 | orchestrator | 2026-01-13 01:00:47.588728 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 01:00:47.588732 | orchestrator | Tuesday 13 January 2026 00:59:31 +0000 (0:00:00.215) 0:00:00.215 ******* 2026-01-13 01:00:47.588736 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:00:47.588740 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:00:47.588743 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:00:47.588747 | orchestrator | 2026-01-13 01:00:47.588751 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 01:00:47.588755 | orchestrator | Tuesday 13 January 2026 00:59:32 +0000 (0:00:00.227) 0:00:00.442 ******* 2026-01-13 01:00:47.588759 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-13 01:00:47.588768 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-13 01:00:47.588772 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-13 01:00:47.588776 | orchestrator | 2026-01-13 01:00:47.588779 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-13 01:00:47.588783 | orchestrator | 2026-01-13 01:00:47.588787 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-13 01:00:47.588793 | orchestrator | Tuesday 13 January 2026 00:59:32 +0000 
(0:00:00.371) 0:00:00.814 ******* 2026-01-13 01:00:47.588799 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:00:47.588820 | orchestrator | 2026-01-13 01:00:47.588829 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-01-13 01:00:47.588835 | orchestrator | Tuesday 13 January 2026 00:59:33 +0000 (0:00:00.693) 0:00:01.507 ******* 2026-01-13 01:00:47.588862 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-01-13 01:00:47.588869 | orchestrator | 2026-01-13 01:00:47.588875 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-01-13 01:00:47.588881 | orchestrator | Tuesday 13 January 2026 00:59:37 +0000 (0:00:04.225) 0:00:05.733 ******* 2026-01-13 01:00:47.588888 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-01-13 01:00:47.588895 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-01-13 01:00:47.588901 | orchestrator | 2026-01-13 01:00:47.588908 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-01-13 01:00:47.588914 | orchestrator | Tuesday 13 January 2026 00:59:44 +0000 (0:00:06.768) 0:00:12.502 ******* 2026-01-13 01:00:47.588921 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-13 01:00:47.588928 | orchestrator | 2026-01-13 01:00:47.588935 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-01-13 01:00:47.588942 | orchestrator | Tuesday 13 January 2026 00:59:47 +0000 (0:00:03.506) 0:00:16.009 ******* 2026-01-13 01:00:47.588949 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-13 01:00:47.588956 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 
2026-01-13 01:00:47.588962 | orchestrator | 
2026-01-13 01:00:47.588969 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-01-13 01:00:47.588976 | orchestrator | Tuesday 13 January 2026 00:59:52 +0000 (0:00:04.407) 0:00:20.416 *******
2026-01-13 01:00:47.588983 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-13 01:00:47.588990 | orchestrator | 
2026-01-13 01:00:47.588996 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-01-13 01:00:47.588999 | orchestrator | Tuesday 13 January 2026 00:59:56 +0000 (0:00:04.351) 0:00:24.768 *******
2026-01-13 01:00:47.589009 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-01-13 01:00:47.589013 | orchestrator | 
2026-01-13 01:00:47.589017 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-01-13 01:00:47.589021 | orchestrator | Tuesday 13 January 2026 01:00:00 +0000 (0:00:04.456) 0:00:29.225 *******
2026-01-13 01:00:47.589025 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:00:47.589028 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:00:47.589032 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:00:47.589036 | orchestrator | 
2026-01-13 01:00:47.589039 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-01-13 01:00:47.589043 | orchestrator | Tuesday 13 January 2026 01:00:01 +0000 (0:00:00.528) 0:00:29.753 *******
2026-01-13 01:00:47.589049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589086 | orchestrator | 
2026-01-13 01:00:47.589090 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-01-13 01:00:47.589094 | orchestrator | Tuesday 13 January 2026 01:00:02 +0000 (0:00:01.276) 0:00:31.030 *******
2026-01-13 01:00:47.589098 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:00:47.589102 | orchestrator | 
2026-01-13 01:00:47.589106 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-01-13 01:00:47.589109 | orchestrator | Tuesday 13 January 2026 01:00:02 +0000 (0:00:00.210) 0:00:31.240 *******
2026-01-13 01:00:47.589116 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:00:47.589120 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:00:47.589124 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:00:47.589127 | orchestrator | 
2026-01-13 01:00:47.589131 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-01-13 01:00:47.589135 | orchestrator | Tuesday 13 January 2026 01:00:03 +0000 (0:00:00.527) 0:00:31.768 *******
2026-01-13 01:00:47.589139 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 01:00:47.589143 | orchestrator | 
2026-01-13 01:00:47.589146 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-01-13 01:00:47.589150 | orchestrator | Tuesday 13 January 2026 01:00:03 +0000 (0:00:00.471) 0:00:32.240 *******
2026-01-13 01:00:47.589154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589180 | orchestrator | 
2026-01-13 01:00:47.589186 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-01-13 01:00:47.589191 | orchestrator | Tuesday 13 January 2026 01:00:05 +0000 (0:00:01.683) 0:00:33.923 *******
2026-01-13 01:00:47.589197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589207 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:00:47.589213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589219 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:00:47.589229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589236 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:00:47.589242 | orchestrator | 
2026-01-13 01:00:47.589248 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-01-13 01:00:47.589253 | orchestrator | Tuesday 13 January 2026 01:00:06 +0000 (0:00:00.937) 0:00:34.861 *******
2026-01-13 01:00:47.589282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589292 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:00:47.589304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589310 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:00:47.589317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589323 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:00:47.589329 | orchestrator | 
2026-01-13 01:00:47.589335 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2026-01-13 01:00:47.589342 | orchestrator | Tuesday 13 January 2026 01:00:07 +0000 (0:00:01.537) 0:00:36.398 *******
2026-01-13 01:00:47.589353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589377 | orchestrator | 
2026-01-13 01:00:47.589381 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2026-01-13 01:00:47.589385 | orchestrator | Tuesday 13 January 2026 01:00:09 +0000 (0:00:01.644) 0:00:38.043 *******
2026-01-13 01:00:47.589389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589409 | orchestrator | 
2026-01-13 01:00:47.589418 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-01-13 01:00:47.589431 | orchestrator | Tuesday 13 January 2026 01:00:12 +0000 (0:00:03.354) 0:00:41.397 *******
2026-01-13 01:00:47.589437 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-01-13 01:00:47.589443 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-01-13 01:00:47.589449 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-01-13 01:00:47.589455 | orchestrator | 
2026-01-13 01:00:47.589460 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-01-13 01:00:47.589466 | orchestrator | Tuesday 13 January 2026 01:00:14 +0000 (0:00:01.679) 0:00:43.077 *******
2026-01-13 01:00:47.589472 | orchestrator | changed: [testbed-node-1]
2026-01-13 01:00:47.589478 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:00:47.589484 | orchestrator | changed: [testbed-node-2]
2026-01-13 01:00:47.589489 | orchestrator | 
2026-01-13 01:00:47.589496 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-01-13 01:00:47.589502 | orchestrator | Tuesday 13 January 2026 01:00:16 +0000 (0:00:01.726) 0:00:44.804 *******
2026-01-13 01:00:47.589508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589515 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:00:47.589519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589523 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:00:47.589531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589538 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:00:47.589542 | orchestrator | 
2026-01-13 01:00:47.589546 | orchestrator | TASK [placement : Check placement containers] **********************************
2026-01-13 01:00:47.589549 | orchestrator | Tuesday 13 January 2026 01:00:16 +0000 (0:00:00.434) 0:00:45.239 *******
2026-01-13 01:00:47.589555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-13 01:00:47.589567 | orchestrator | 
2026-01-13 01:00:47.589571 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-01-13 01:00:47.589575 | orchestrator | Tuesday 13 January 2026 01:00:17 +0000 (0:00:01.018) 0:00:46.257 *******
2026-01-13 01:00:47.589579 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:00:47.589582 | orchestrator | 
2026-01-13 01:00:47.589586 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-01-13 01:00:47.589590 | orchestrator | Tuesday 13 January 2026 01:00:20 +0000 (0:00:02.751) 0:00:49.009 *******
2026-01-13 01:00:47.589594 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:00:47.589597 | orchestrator | 
2026-01-13 01:00:47.589601 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-01-13 01:00:47.589605 | orchestrator | Tuesday 13 January 2026 01:00:22 +0000 (0:00:02.211) 0:00:51.221 *******
2026-01-13 01:00:47.589609 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:00:47.589615 | orchestrator | 
2026-01-13 01:00:47.589619 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-13 01:00:47.589623 | orchestrator | Tuesday 13 January 2026 01:00:36 +0000 (0:00:13.736) 0:01:04.957 *******
2026-01-13 01:00:47.589627 | orchestrator | 
2026-01-13 01:00:47.589630 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-13 01:00:47.589634 | orchestrator | Tuesday 13 January 2026 01:00:36 +0000 (0:00:00.126) 0:01:05.084 *******
2026-01-13 01:00:47.589638 | orchestrator | 
2026-01-13 01:00:47.589644 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-13 01:00:47.589648 | orchestrator | Tuesday 13 January 2026 01:00:36 +0000 (0:00:00.113) 0:01:05.198 *******
2026-01-13 01:00:47.589652 | orchestrator | 
2026-01-13 01:00:47.589656 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-01-13 01:00:47.589659 | orchestrator | Tuesday 13 January 2026 01:00:36 +0000 (0:00:00.107) 0:01:05.306 *******
2026-01-13 01:00:47.589663 | orchestrator | changed: [testbed-node-1]
2026-01-13 01:00:47.589667 | orchestrator | changed: [testbed-node-2]
2026-01-13 01:00:47.589670 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:00:47.589674 | orchestrator | 
2026-01-13 01:00:47.589678 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 01:00:47.589684 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-13 01:00:47.589688 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-13 01:00:47.589692 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-13 01:00:47.589695 | orchestrator | 
2026-01-13 01:00:47.589699 | orchestrator | 
2026-01-13 01:00:47.589703 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 01:00:47.589707 | orchestrator | Tuesday 13 January 2026 01:00:44 +0000 (0:00:07.841) 0:01:13.147 *******
2026-01-13 01:00:47.589710 | orchestrator | ===============================================================================
2026-01-13 01:00:47.589714 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.74s
2026-01-13 01:00:47.589718 | orchestrator | placement : Restart placement-api container ----------------------------- 7.84s
2026-01-13 01:00:47.589722 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.77s
2026-01-13 01:00:47.589725 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.46s
2026-01-13 01:00:47.589729 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.41s
2026-01-13 01:00:47.589733 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.35s
2026-01-13 01:00:47.589736 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.23s
2026-01-13 01:00:47.589740 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.51s
2026-01-13 01:00:47.589744 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.36s
2026-01-13 01:00:47.589748 | orchestrator | placement : Creating placement databases -------------------------------- 2.75s
2026-01-13 01:00:47.589751 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.21s
2026-01-13 01:00:47.589755 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.73s
2026-01-13 01:00:47.589759 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.68s
2026-01-13 01:00:47.589762 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.68s
2026-01-13 01:00:47.589766 | orchestrator | placement : Copying over config.json files for services ----------------- 1.64s
2026-01-13 01:00:47.589770 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.54s
2026-01-13 01:00:47.589776 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.28s
2026-01-13 01:00:47.589780 | orchestrator | placement : Check placement containers ---------------------------------- 1.02s
2026-01-13 01:00:47.589784 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.94s
2026-01-13 01:00:47.589787 | orchestrator | placement : include_tasks ----------------------------------------------- 0.69s
2026-01-13 01:00:47.589845 | orchestrator | 2026-01-13 01:00:47 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED
2026-01-13 01:00:47.589851 | orchestrator | 2026-01-13 01:00:47 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:00:50.651978 | orchestrator | 2026-01-13 01:00:50 | INFO  | Task dede57c8-30ae-496c-bc4c-7395ed5d10e8 is in state STARTED
2026-01-13 01:00:50.653026 | orchestrator | 2026-01-13 01:00:50 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:00:50.654224 | orchestrator | 2026-01-13 01:00:50 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 01:00:50.656265 | orchestrator | 2026-01-13 01:00:50 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED
2026-01-13 01:00:50.656295 | orchestrator | 2026-01-13 01:00:50 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:00:53.704011 | orchestrator | 2026-01-13 01:00:53 | INFO  | Task dede57c8-30ae-496c-bc4c-7395ed5d10e8 is in state SUCCESS
2026-01-13 01:00:53.706172 | orchestrator | 2026-01-13 01:00:53 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:00:53.708366 | orchestrator | 2026-01-13 01:00:53 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 01:00:53.709733 | orchestrator | 2026-01-13 01:00:53 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED
2026-01-13 01:00:53.712302 | orchestrator | 2026-01-13 01:00:53 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED
2026-01-13 01:00:53.712356 | orchestrator | 2026-01-13 01:00:53 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:00:56.769810 | orchestrator | 2026-01-13 01:00:56 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:00:56.770087 | orchestrator | 2026-01-13 01:00:56 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 01:00:56.770673 | orchestrator | 2026-01-13 01:00:56 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED
2026-01-13 01:00:56.772805 | orchestrator | 2026-01-13 01:00:56 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED
2026-01-13 01:00:56.772842 | orchestrator | 2026-01-13 01:00:56 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:00:59.802501 | orchestrator | 2026-01-13 01:00:59 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:00:59.802555 | orchestrator | 2026-01-13 01:00:59 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 01:00:59.803871 | orchestrator | 2026-01-13 01:00:59 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED
2026-01-13 01:00:59.804489 | orchestrator | 2026-01-13 01:00:59 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED
2026-01-13 01:00:59.804527 | orchestrator | 2026-01-13 01:00:59 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:01:02.850559 | orchestrator | 2026-01-13 01:01:02 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:01:02.851529 | orchestrator | 2026-01-13 01:01:02 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 01:01:02.852215 | orchestrator | 2026-01-13 01:01:02 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED
2026-01-13 01:01:02.852812 | orchestrator | 2026-01-13 01:01:02 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED
2026-01-13 01:01:02.853985 | orchestrator | 2026-01-13 01:01:02 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:01:05.920253 | orchestrator | 2026-01-13 01:01:05 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:01:05.921135 | orchestrator | 2026-01-13 01:01:05 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 01:01:05.922505 | orchestrator | 2026-01-13 01:01:05 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED
2026-01-13 01:01:05.926229 | orchestrator | 2026-01-13 01:01:05 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED
2026-01-13 01:01:05.926282 | orchestrator | 2026-01-13 01:01:05 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:01:08.967394 | orchestrator | 2026-01-13 01:01:08 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:01:08.968133 | orchestrator | 2026-01-13 01:01:08 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED
2026-01-13 01:01:08.969079 | orchestrator | 2026-01-13 01:01:08 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED
2026-01-13 01:01:08.969896 | orchestrator
| 2026-01-13 01:01:08 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 01:01:08.969956 | orchestrator | 2026-01-13 01:01:08 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:01:12.011210 | orchestrator | 2026-01-13 01:01:12 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:01:12.013978 | orchestrator | 2026-01-13 01:01:12 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state STARTED 2026-01-13 01:01:12.016188 | orchestrator | 2026-01-13 01:01:12 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:01:12.018551 | orchestrator | 2026-01-13 01:01:12 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 01:01:12.018624 | orchestrator | 2026-01-13 01:01:12 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:01:15.132590 | orchestrator | 2026-01-13 01:01:15.132646 | orchestrator | 2026-01-13 01:01:15.132653 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 01:01:15.132657 | orchestrator | 2026-01-13 01:01:15.132661 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 01:01:15.132665 | orchestrator | Tuesday 13 January 2026 01:00:49 +0000 (0:00:00.201) 0:00:00.201 ******* 2026-01-13 01:01:15.132669 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:01:15.132674 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:01:15.132678 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:01:15.132682 | orchestrator | 2026-01-13 01:01:15.132685 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 01:01:15.132690 | orchestrator | Tuesday 13 January 2026 01:00:49 +0000 (0:00:00.295) 0:00:00.496 ******* 2026-01-13 01:01:15.132697 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-13 01:01:15.132752 | orchestrator | ok: 
[testbed-node-1] => (item=enable_keystone_True) 2026-01-13 01:01:15.132761 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-13 01:01:15.132793 | orchestrator | 2026-01-13 01:01:15.132822 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-01-13 01:01:15.132890 | orchestrator | 2026-01-13 01:01:15.132896 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-01-13 01:01:15.132900 | orchestrator | Tuesday 13 January 2026 01:00:50 +0000 (0:00:00.643) 0:00:01.139 ******* 2026-01-13 01:01:15.132904 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:01:15.132921 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:01:15.132953 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:01:15.133057 | orchestrator | 2026-01-13 01:01:15.133062 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:01:15.133067 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:01:15.133071 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:01:15.133075 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:01:15.133079 | orchestrator | 2026-01-13 01:01:15.133082 | orchestrator | 2026-01-13 01:01:15.133086 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:01:15.133090 | orchestrator | Tuesday 13 January 2026 01:00:51 +0000 (0:00:00.704) 0:00:01.844 ******* 2026-01-13 01:01:15.133093 | orchestrator | =============================================================================== 2026-01-13 01:01:15.133097 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.70s 2026-01-13 01:01:15.133101 | orchestrator | Group hosts based on 
enabled services ----------------------------------- 0.64s 2026-01-13 01:01:15.133104 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-01-13 01:01:15.133108 | orchestrator | 2026-01-13 01:01:15.133112 | orchestrator | 2026-01-13 01:01:15.133115 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 01:01:15.133119 | orchestrator | 2026-01-13 01:01:15.133123 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 01:01:15.133126 | orchestrator | Tuesday 13 January 2026 00:58:23 +0000 (0:00:00.296) 0:00:00.296 ******* 2026-01-13 01:01:15.133130 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:01:15.133134 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:01:15.133137 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:01:15.133141 | orchestrator | 2026-01-13 01:01:15.133145 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 01:01:15.133148 | orchestrator | Tuesday 13 January 2026 00:58:23 +0000 (0:00:00.318) 0:00:00.615 ******* 2026-01-13 01:01:15.133152 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-13 01:01:15.133156 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-13 01:01:15.133160 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-13 01:01:15.133163 | orchestrator | 2026-01-13 01:01:15.133167 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-13 01:01:15.133171 | orchestrator | 2026-01-13 01:01:15.133177 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-13 01:01:15.133183 | orchestrator | Tuesday 13 January 2026 00:58:23 +0000 (0:00:00.510) 0:00:01.125 ******* 2026-01-13 01:01:15.133190 | orchestrator | included: 
/ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:01:15.133196 | orchestrator | 2026-01-13 01:01:15.133202 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-01-13 01:01:15.133214 | orchestrator | Tuesday 13 January 2026 00:58:24 +0000 (0:00:00.507) 0:00:01.633 ******* 2026-01-13 01:01:15.133220 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-01-13 01:01:15.133226 | orchestrator | 2026-01-13 01:01:15.133232 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-01-13 01:01:15.133238 | orchestrator | Tuesday 13 January 2026 00:58:28 +0000 (0:00:04.383) 0:00:06.016 ******* 2026-01-13 01:01:15.133243 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-01-13 01:01:15.133249 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-01-13 01:01:15.133256 | orchestrator | 2026-01-13 01:01:15.133263 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-01-13 01:01:15.133298 | orchestrator | Tuesday 13 January 2026 00:58:36 +0000 (0:00:07.449) 0:00:13.466 ******* 2026-01-13 01:01:15.133323 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-13 01:01:15.133327 | orchestrator | 2026-01-13 01:01:15.133332 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-01-13 01:01:15.133336 | orchestrator | Tuesday 13 January 2026 00:58:39 +0000 (0:00:03.459) 0:00:16.925 ******* 2026-01-13 01:01:15.133437 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-13 01:01:15.133449 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-01-13 01:01:15.133456 | orchestrator | 2026-01-13 01:01:15.133471 | orchestrator | TASK 
[service-ks-register : designate | Creating roles] ************************ 2026-01-13 01:01:15.133478 | orchestrator | Tuesday 13 January 2026 00:58:44 +0000 (0:00:04.981) 0:00:21.907 ******* 2026-01-13 01:01:15.133485 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-13 01:01:15.133491 | orchestrator | 2026-01-13 01:01:15.133499 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-01-13 01:01:15.133505 | orchestrator | Tuesday 13 January 2026 00:58:48 +0000 (0:00:03.586) 0:00:25.494 ******* 2026-01-13 01:01:15.133516 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-01-13 01:01:15.133521 | orchestrator | 2026-01-13 01:01:15.133526 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-01-13 01:01:15.133584 | orchestrator | Tuesday 13 January 2026 00:58:52 +0000 (0:00:04.196) 0:00:29.690 ******* 2026-01-13 01:01:15.133735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.133782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.133791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.133808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.133823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.133834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.133841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.133853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.133860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.133870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.133880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.133886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.133895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.133902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.133908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.133990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134101 | orchestrator | 2026-01-13 01:01:15.134107 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-01-13 01:01:15.134111 | orchestrator | Tuesday 13 January 2026 00:58:55 +0000 (0:00:03.512) 0:00:33.203 ******* 2026-01-13 01:01:15.134114 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:01:15.134118 | orchestrator | 2026-01-13 01:01:15.134122 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-01-13 01:01:15.134126 | orchestrator | Tuesday 13 January 2026 00:58:56 +0000 (0:00:00.127) 0:00:33.330 ******* 2026-01-13 01:01:15.134130 | orchestrator | skipping: 
[testbed-node-0] 2026-01-13 01:01:15.134133 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:01:15.134137 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:01:15.134141 | orchestrator | 2026-01-13 01:01:15.134144 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-13 01:01:15.134148 | orchestrator | Tuesday 13 January 2026 00:58:56 +0000 (0:00:00.288) 0:00:33.619 ******* 2026-01-13 01:01:15.134152 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:01:15.134156 | orchestrator | 2026-01-13 01:01:15.134160 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-01-13 01:01:15.134163 | orchestrator | Tuesday 13 January 2026 00:58:57 +0000 (0:00:00.765) 0:00:34.385 ******* 2026-01-13 01:01:15.134171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.134175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.134183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.134190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2026-01-13 01:01:15.134259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.134270 | orchestrator | 2026-01-13 01:01:15.134274 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-01-13 01:01:15.134278 | orchestrator | Tuesday 13 January 2026 00:59:03 +0000 (0:00:06.760) 0:00:41.146 ******* 2026-01-13 01:01:15.134284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-13 01:01:15.134288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-13 01:01:15.134294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134313 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:01:15.134318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-13 01:01:15.134323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-13 01:01:15.134329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
2026-01-13 01:01:15 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:01:15 | INFO  | Task 9e2ad8df-a6a3-46e8-8666-5e5005fac8d4 is in state SUCCESS
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134633 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:01:15.134649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-13 01:01:15.134667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-13 01:01:15.134686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134714 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:01:15.134721 | orchestrator | 2026-01-13 01:01:15.134728 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-01-13 01:01:15.134736 | orchestrator | Tuesday 13 January 2026 00:59:05 +0000 (0:00:01.991) 0:00:43.138 ******* 2026-01-13 01:01:15.134747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-13 01:01:15.134762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-13 01:01:15.134775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-13 01:01:15.134819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-13 01:01:15.134826 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:01:15.134837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134864 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:01:15.134872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2026-01-13 01:01:15.134886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-13 01:01:15.134894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134930 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.134946 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:01:15.134954 | orchestrator | 2026-01-13 01:01:15.134961 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-01-13 01:01:15.134968 | orchestrator | Tuesday 13 January 2026 00:59:07 +0000 (0:00:02.090) 0:00:45.228 ******* 2026-01-13 01:01:15.134975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.134991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.135005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.135013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135167 | orchestrator | 2026-01-13 01:01:15.135175 | orchestrator | TASK [designate : 
Copying over designate.conf] ********************************* 2026-01-13 01:01:15.135183 | orchestrator | Tuesday 13 January 2026 00:59:14 +0000 (0:00:06.970) 0:00:52.198 ******* 2026-01-13 01:01:15.135199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.135212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.135222 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.135236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135349 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135385 | orchestrator | 2026-01-13 01:01:15.135393 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-01-13 01:01:15.135404 | orchestrator | Tuesday 13 January 2026 00:59:37 +0000 (0:00:22.158) 0:01:14.357 ******* 2026-01-13 01:01:15.135412 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-13 01:01:15.135420 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-13 01:01:15.135428 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-13 01:01:15.135434 | orchestrator | 2026-01-13 01:01:15.135440 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-01-13 01:01:15.135447 | orchestrator | Tuesday 13 January 2026 00:59:42 +0000 (0:00:05.526) 0:01:19.884 ******* 2026-01-13 01:01:15.135455 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-13 01:01:15.135462 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-13 01:01:15.135470 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-13 01:01:15.135478 | orchestrator | 2026-01-13 01:01:15.135485 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-01-13 01:01:15.135493 | orchestrator | Tuesday 13 
January 2026 00:59:45 +0000 (0:00:03.049) 0:01:22.933 ******* 2026-01-13 01:01:15.135501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-13 01:01:15.135513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-13 01:01:15.135526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-13 01:01:15.135540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-01-13 01:01:15.135585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135618 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-13 01:01:15.135684 | orchestrator |
2026-01-13 01:01:15.135692 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-01-13 01:01:15.135699 | orchestrator | Tuesday 13 January 2026 00:59:49 +0000 (0:00:03.988) 0:01:26.921 *******
2026-01-13 01:01:15.135707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-13 01:01:15.135719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-13 01:01:15.135728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-13 01:01:15.135745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-01-13 01:01:15.135754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135782 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.135861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.135928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-13 01:01:15.135938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-13 01:01:15.135946 | orchestrator |
2026-01-13 01:01:15.135953 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-13 01:01:15.135960 | orchestrator | Tuesday 13 January 2026 00:59:53 +0000 (0:00:03.645) 0:01:30.567 *******
2026-01-13 01:01:15.135968 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:01:15.135976 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:01:15.135983 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:01:15.135990 | orchestrator |
2026-01-13 01:01:15.135997 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-01-13 01:01:15.136004 | orchestrator | Tuesday 13 January 2026 00:59:54 +0000 (0:00:00.783) 0:01:31.350 *******
2026-01-13 01:01:15.136012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-13 01:01:15.136033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-13 01:01:15.136046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-01-13 01:01:15.136058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.136071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.136080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.136088 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:01:15.136096 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-13 01:01:15.136103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-13 01:01:15.136114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.136126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.136137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.136145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.136152 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:01:15.136159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-13 01:01:15.136167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-13 01:01:15.136176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.136188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.136196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-13 01:01:15.136208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-13 01:01:15.136215 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:01:15.136222 | orchestrator |
2026-01-13 01:01:15.136229 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-01-13 01:01:15.136237 | orchestrator | Tuesday 13 January 2026 00:59:54 +0000 (0:00:00.716) 0:01:32.067 *******
2026-01-13 01:01:15.136244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-13 01:01:15.136252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.136264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-13 01:01:15.136272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136363 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:01:15.136436 | orchestrator | 2026-01-13 01:01:15.136444 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-13 01:01:15.136452 | orchestrator | Tuesday 13 January 2026 01:00:00 +0000 (0:00:05.229) 0:01:37.296 ******* 2026-01-13 01:01:15.136459 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:01:15.136467 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:01:15.136474 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:01:15.136481 | orchestrator | 2026-01-13 01:01:15.136489 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-01-13 01:01:15.136496 | orchestrator | Tuesday 13 January 2026 01:00:00 +0000 (0:00:00.585) 0:01:37.881 ******* 2026-01-13 01:01:15.136504 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-01-13 01:01:15.136512 | orchestrator | 2026-01-13 01:01:15.136519 | orchestrator | TASK [designate : Creating Designate databases user and setting 
permissions] *** 2026-01-13 01:01:15.136526 | orchestrator | Tuesday 13 January 2026 01:00:03 +0000 (0:00:02.643) 0:01:40.525 ******* 2026-01-13 01:01:15.136533 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-13 01:01:15.136540 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-01-13 01:01:15.136547 | orchestrator | 2026-01-13 01:01:15.136554 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-01-13 01:01:15.136561 | orchestrator | Tuesday 13 January 2026 01:00:05 +0000 (0:00:02.537) 0:01:43.063 ******* 2026-01-13 01:01:15.136568 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:01:15.136574 | orchestrator | 2026-01-13 01:01:15.136581 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-13 01:01:15.136588 | orchestrator | Tuesday 13 January 2026 01:00:21 +0000 (0:00:15.357) 0:01:58.421 ******* 2026-01-13 01:01:15.136602 | orchestrator | 2026-01-13 01:01:15.136609 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-13 01:01:15.136616 | orchestrator | Tuesday 13 January 2026 01:00:21 +0000 (0:00:00.063) 0:01:58.484 ******* 2026-01-13 01:01:15.136624 | orchestrator | 2026-01-13 01:01:15.136631 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-13 01:01:15.136638 | orchestrator | Tuesday 13 January 2026 01:00:21 +0000 (0:00:00.073) 0:01:58.558 ******* 2026-01-13 01:01:15.136645 | orchestrator | 2026-01-13 01:01:15.136652 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-01-13 01:01:15.136659 | orchestrator | Tuesday 13 January 2026 01:00:21 +0000 (0:00:00.069) 0:01:58.627 ******* 2026-01-13 01:01:15.136667 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:01:15.136674 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:01:15.136681 | 
orchestrator | changed: [testbed-node-2] 2026-01-13 01:01:15.136688 | orchestrator | 2026-01-13 01:01:15.136695 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-01-13 01:01:15.136703 | orchestrator | Tuesday 13 January 2026 01:00:34 +0000 (0:00:13.089) 0:02:11.717 ******* 2026-01-13 01:01:15.136710 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:01:15.136717 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:01:15.136725 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:01:15.136732 | orchestrator | 2026-01-13 01:01:15.136739 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-01-13 01:01:15.136746 | orchestrator | Tuesday 13 January 2026 01:00:41 +0000 (0:00:06.992) 0:02:18.709 ******* 2026-01-13 01:01:15.136754 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:01:15.136760 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:01:15.136768 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:01:15.136775 | orchestrator | 2026-01-13 01:01:15.136782 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-01-13 01:01:15.136788 | orchestrator | Tuesday 13 January 2026 01:00:51 +0000 (0:00:09.724) 0:02:28.434 ******* 2026-01-13 01:01:15.136796 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:01:15.136803 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:01:15.136810 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:01:15.136817 | orchestrator | 2026-01-13 01:01:15.136825 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-01-13 01:01:15.136832 | orchestrator | Tuesday 13 January 2026 01:00:56 +0000 (0:00:05.027) 0:02:33.461 ******* 2026-01-13 01:01:15.136843 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:01:15.136850 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:01:15.136857 | orchestrator | 
changed: [testbed-node-2] 2026-01-13 01:01:15.136864 | orchestrator | 2026-01-13 01:01:15.136871 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-01-13 01:01:15.136879 | orchestrator | Tuesday 13 January 2026 01:01:01 +0000 (0:00:04.871) 0:02:38.333 ******* 2026-01-13 01:01:15.136886 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:01:15.136892 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:01:15.136897 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:01:15.136903 | orchestrator | 2026-01-13 01:01:15.136908 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-01-13 01:01:15.136914 | orchestrator | Tuesday 13 January 2026 01:01:07 +0000 (0:00:06.020) 0:02:44.353 ******* 2026-01-13 01:01:15.136919 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:01:15.136924 | orchestrator | 2026-01-13 01:01:15.136930 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:01:15.136936 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-13 01:01:15.136941 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-13 01:01:15.136952 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-13 01:01:15.136959 | orchestrator | 2026-01-13 01:01:15.136965 | orchestrator | 2026-01-13 01:01:15.136978 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:01:15.136985 | orchestrator | Tuesday 13 January 2026 01:01:14 +0000 (0:00:07.211) 0:02:51.565 ******* 2026-01-13 01:01:15.136991 | orchestrator | =============================================================================== 2026-01-13 01:01:15.136998 | orchestrator | designate : Copying over designate.conf 
-------------------------------- 22.16s 2026-01-13 01:01:15.137005 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.36s 2026-01-13 01:01:15.137012 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.09s 2026-01-13 01:01:15.137017 | orchestrator | designate : Restart designate-central container ------------------------- 9.72s 2026-01-13 01:01:15.137041 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.45s 2026-01-13 01:01:15.137049 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.21s 2026-01-13 01:01:15.137056 | orchestrator | designate : Restart designate-api container ----------------------------- 6.99s 2026-01-13 01:01:15.137063 | orchestrator | designate : Copying over config.json files for services ----------------- 6.97s 2026-01-13 01:01:15.137070 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.76s 2026-01-13 01:01:15.137077 | orchestrator | designate : Restart designate-worker container -------------------------- 6.02s 2026-01-13 01:01:15.137083 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.53s 2026-01-13 01:01:15.137089 | orchestrator | designate : Check designate containers ---------------------------------- 5.23s 2026-01-13 01:01:15.137096 | orchestrator | designate : Restart designate-producer container ------------------------ 5.03s 2026-01-13 01:01:15.137103 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.98s 2026-01-13 01:01:15.137110 | orchestrator | designate : Restart designate-mdns container ---------------------------- 4.87s 2026-01-13 01:01:15.137117 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.38s 2026-01-13 01:01:15.137124 | orchestrator | service-ks-register : designate | Granting user roles 
------------------- 4.20s 2026-01-13 01:01:15.137131 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.99s 2026-01-13 01:01:15.137139 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.65s 2026-01-13 01:01:15.137146 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.59s 2026-01-13 01:01:15.137153 | orchestrator | 2026-01-13 01:01:15 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:01:15.137160 | orchestrator | 2026-01-13 01:01:15 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 01:01:15.137168 | orchestrator | 2026-01-13 01:01:15 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:01:18.166835 | orchestrator | 2026-01-13 01:01:18 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:01:18.168990 | orchestrator | 2026-01-13 01:01:18 | INFO  | Task 7ee72a7a-c81e-4cba-bad5-963c52381c77 is in state STARTED 2026-01-13 01:01:18.171500 | orchestrator | 2026-01-13 01:01:18 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:01:18.174184 | orchestrator | 2026-01-13 01:01:18 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 01:01:18.174249 | orchestrator | 2026-01-13 01:01:18 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:01:21.210652 | orchestrator | 2026-01-13 01:01:21 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:01:21.212208 | orchestrator | 2026-01-13 01:01:21 | INFO  | Task 7ee72a7a-c81e-4cba-bad5-963c52381c77 is in state STARTED 2026-01-13 01:01:21.212975 | orchestrator | 2026-01-13 01:01:21 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:01:21.213896 | orchestrator | 2026-01-13 01:01:21 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 
01:02:00.850422 | orchestrator | 2026-01-13 01:02:00 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:02:00.852174 | orchestrator | 2026-01-13 01:02:00 | INFO  | Task 7ee72a7a-c81e-4cba-bad5-963c52381c77 is in state SUCCESS 2026-01-13 01:02:00.852499 | orchestrator | 2026-01-13 01:02:00 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED 2026-01-13 01:02:00.853138 | orchestrator | 2026-01-13 01:02:00 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:02:00.853866 | orchestrator | 2026-01-13 01:02:00 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 01:02:00.853894 | orchestrator | 2026-01-13 01:02:00 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:02:03.880142 | orchestrator | 2026-01-13 01:02:03 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:02:03.880385 | orchestrator | 2026-01-13 01:02:03 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED 2026-01-13 01:02:03.881209 | orchestrator | 2026-01-13 01:02:03 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:02:03.882503 | orchestrator | 2026-01-13 01:02:03 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 01:02:03.882535 | orchestrator | 2026-01-13 01:02:03 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:02:06.928747 | orchestrator | 2026-01-13 01:02:06 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:02:06.929630 | orchestrator | 2026-01-13 01:02:06 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED 2026-01-13 01:02:06.931450 | orchestrator | 2026-01-13 01:02:06 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:02:06.932819 | orchestrator | 2026-01-13 01:02:06 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 
01:02:16.091803 | orchestrator
| 2026-01-13 01:02:16 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:02:19.129588 | orchestrator | 2026-01-13 01:02:19 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:02:19.130767 | orchestrator | 2026-01-13 01:02:19 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED 2026-01-13 01:02:19.131668 | orchestrator | 2026-01-13 01:02:19 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state STARTED 2026-01-13 01:02:19.134376 | orchestrator | 2026-01-13 01:02:19 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 01:02:19.134426 | orchestrator | 2026-01-13 01:02:19 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:02:22.171911 | orchestrator | 2026-01-13 01:02:22 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:02:22.171995 | orchestrator | 2026-01-13 01:02:22 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED 2026-01-13 01:02:22.172969 | orchestrator | 2026-01-13 01:02:22 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED 2026-01-13 01:02:22.173613 | orchestrator | 2026-01-13 01:02:22 | INFO  | Task 2d57ce62-ba5d-4095-a213-2e324d9777a2 is in state SUCCESS 2026-01-13 01:02:22.174978 | orchestrator | 2026-01-13 01:02:22.174998 | orchestrator | 2026-01-13 01:02:22.175002 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 01:02:22.175006 | orchestrator | 2026-01-13 01:02:22.175010 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 01:02:22.175013 | orchestrator | Tuesday 13 January 2026 01:01:20 +0000 (0:00:00.464) 0:00:00.464 ******* 2026-01-13 01:02:22.175017 | orchestrator | ok: [testbed-manager] 2026-01-13 01:02:22.175021 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:02:22.175024 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:02:22.175036 | 
orchestrator | ok: [testbed-node-2] 2026-01-13 01:02:22.175039 | orchestrator | ok: [testbed-node-3] 2026-01-13 01:02:22.175042 | orchestrator | ok: [testbed-node-4] 2026-01-13 01:02:22.175046 | orchestrator | ok: [testbed-node-5] 2026-01-13 01:02:22.175051 | orchestrator | 2026-01-13 01:02:22.175056 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 01:02:22.175062 | orchestrator | Tuesday 13 January 2026 01:01:22 +0000 (0:00:01.687) 0:00:02.151 ******* 2026-01-13 01:02:22.175067 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-01-13 01:02:22.175072 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-01-13 01:02:22.175078 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-01-13 01:02:22.175083 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-01-13 01:02:22.175119 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-01-13 01:02:22.175125 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-01-13 01:02:22.175130 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-01-13 01:02:22.175136 | orchestrator | 2026-01-13 01:02:22.175141 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-13 01:02:22.175146 | orchestrator | 2026-01-13 01:02:22.175163 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-01-13 01:02:22.175169 | orchestrator | Tuesday 13 January 2026 01:01:24 +0000 (0:00:02.389) 0:00:04.540 ******* 2026-01-13 01:02:22.175200 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 01:02:22.175206 | orchestrator | 2026-01-13 01:02:22.175212 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating 
services] ********************** 2026-01-13 01:02:22.175217 | orchestrator | Tuesday 13 January 2026 01:01:29 +0000 (0:00:05.171) 0:00:09.712 ******* 2026-01-13 01:02:22.175248 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-01-13 01:02:22.175254 | orchestrator | 2026-01-13 01:02:22.175259 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-01-13 01:02:22.175265 | orchestrator | Tuesday 13 January 2026 01:01:33 +0000 (0:00:03.719) 0:00:13.433 ******* 2026-01-13 01:02:22.175271 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-01-13 01:02:22.175277 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-01-13 01:02:22.175283 | orchestrator | 2026-01-13 01:02:22.175288 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-01-13 01:02:22.175294 | orchestrator | Tuesday 13 January 2026 01:01:40 +0000 (0:00:06.831) 0:00:20.265 ******* 2026-01-13 01:02:22.175301 | orchestrator | ok: [testbed-manager] => (item=service) 2026-01-13 01:02:22.175306 | orchestrator | 2026-01-13 01:02:22.175312 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-01-13 01:02:22.175318 | orchestrator | Tuesday 13 January 2026 01:01:42 +0000 (0:00:02.703) 0:00:22.969 ******* 2026-01-13 01:02:22.175323 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-13 01:02:22.175329 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-01-13 01:02:22.175335 | orchestrator | 2026-01-13 01:02:22.175340 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-01-13 01:02:22.175345 | orchestrator | Tuesday 13 January 2026 01:01:46 +0000 (0:00:03.351) 
0:00:26.320 ******* 2026-01-13 01:02:22.175351 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-01-13 01:02:22.175356 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-01-13 01:02:22.175361 | orchestrator | 2026-01-13 01:02:22.175365 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-01-13 01:02:22.175371 | orchestrator | Tuesday 13 January 2026 01:01:52 +0000 (0:00:05.996) 0:00:32.316 ******* 2026-01-13 01:02:22.175376 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-01-13 01:02:22.175381 | orchestrator | 2026-01-13 01:02:22.175386 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:02:22.175391 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:02:22.175396 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:02:22.175401 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:02:22.175406 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:02:22.175417 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:02:22.175431 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:02:22.175437 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:02:22.175442 | orchestrator | 2026-01-13 01:02:22.175447 | orchestrator | 2026-01-13 01:02:22.175452 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:02:22.175457 | orchestrator | Tuesday 13 January 2026 01:01:57 +0000 (0:00:05.406) 0:00:37.723 
******* 2026-01-13 01:02:22.175466 | orchestrator | =============================================================================== 2026-01-13 01:02:22.175471 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.83s 2026-01-13 01:02:22.175476 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.00s 2026-01-13 01:02:22.175481 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.41s 2026-01-13 01:02:22.175487 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 5.17s 2026-01-13 01:02:22.175491 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.72s 2026-01-13 01:02:22.175495 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.35s 2026-01-13 01:02:22.175500 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.70s 2026-01-13 01:02:22.175505 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.39s 2026-01-13 01:02:22.175510 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.69s 2026-01-13 01:02:22.175515 | orchestrator | 2026-01-13 01:02:22.175520 | orchestrator | 2026-01-13 01:02:22.175525 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 01:02:22.175530 | orchestrator | 2026-01-13 01:02:22.175535 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 01:02:22.175540 | orchestrator | Tuesday 13 January 2026 01:00:27 +0000 (0:00:00.542) 0:00:00.542 ******* 2026-01-13 01:02:22.175545 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:02:22.175550 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:02:22.175555 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:02:22.175561 | orchestrator | 2026-01-13 
01:02:22.175566 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 01:02:22.175571 | orchestrator | Tuesday 13 January 2026 01:00:28 +0000 (0:00:00.329) 0:00:00.871 ******* 2026-01-13 01:02:22.175576 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-01-13 01:02:22.175581 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-01-13 01:02:22.175586 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-01-13 01:02:22.175592 | orchestrator | 2026-01-13 01:02:22.175597 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-01-13 01:02:22.175602 | orchestrator | 2026-01-13 01:02:22.175608 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-13 01:02:22.175613 | orchestrator | Tuesday 13 January 2026 01:00:28 +0000 (0:00:00.381) 0:00:01.253 ******* 2026-01-13 01:02:22.175618 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:02:22.175623 | orchestrator | 2026-01-13 01:02:22.175628 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-01-13 01:02:22.175634 | orchestrator | Tuesday 13 January 2026 01:00:29 +0000 (0:00:00.604) 0:00:01.858 ******* 2026-01-13 01:02:22.175639 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-01-13 01:02:22.175644 | orchestrator | 2026-01-13 01:02:22.175650 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-01-13 01:02:22.175659 | orchestrator | Tuesday 13 January 2026 01:00:33 +0000 (0:00:03.891) 0:00:05.749 ******* 2026-01-13 01:02:22.175665 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-01-13 01:02:22.175670 | orchestrator | changed: 
[testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-01-13 01:02:22.175676 | orchestrator | 2026-01-13 01:02:22.175681 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-01-13 01:02:22.175686 | orchestrator | Tuesday 13 January 2026 01:00:39 +0000 (0:00:06.759) 0:00:12.508 ******* 2026-01-13 01:02:22.175691 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-13 01:02:22.175697 | orchestrator | 2026-01-13 01:02:22.175702 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-01-13 01:02:22.175707 | orchestrator | Tuesday 13 January 2026 01:00:42 +0000 (0:00:03.073) 0:00:15.581 ******* 2026-01-13 01:02:22.175712 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-13 01:02:22.175717 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-01-13 01:02:22.175723 | orchestrator | 2026-01-13 01:02:22.175728 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-01-13 01:02:22.175733 | orchestrator | Tuesday 13 January 2026 01:00:46 +0000 (0:00:03.983) 0:00:19.565 ******* 2026-01-13 01:02:22.175739 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-13 01:02:22.175744 | orchestrator | 2026-01-13 01:02:22.175749 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-01-13 01:02:22.175754 | orchestrator | Tuesday 13 January 2026 01:00:50 +0000 (0:00:03.232) 0:00:22.797 ******* 2026-01-13 01:02:22.175760 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-01-13 01:02:22.175765 | orchestrator | 2026-01-13 01:02:22.175770 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-01-13 01:02:22.175776 | orchestrator | Tuesday 13 January 2026 01:00:53 +0000 (0:00:03.616) 0:00:26.414 ******* 2026-01-13 
01:02:22.175781 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:02:22.175786 | orchestrator | 2026-01-13 01:02:22.175791 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-01-13 01:02:22.175801 | orchestrator | Tuesday 13 January 2026 01:00:56 +0000 (0:00:03.250) 0:00:29.665 ******* 2026-01-13 01:02:22.175807 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:02:22.175812 | orchestrator | 2026-01-13 01:02:22.175817 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-01-13 01:02:22.175822 | orchestrator | Tuesday 13 January 2026 01:01:00 +0000 (0:00:03.956) 0:00:33.621 ******* 2026-01-13 01:02:22.175827 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:02:22.175832 | orchestrator | 2026-01-13 01:02:22.175837 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-01-13 01:02:22.175845 | orchestrator | Tuesday 13 January 2026 01:01:04 +0000 (0:00:03.186) 0:00:36.807 ******* 2026-01-13 01:02:22.175852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 
2026-01-13 01:02:22.175861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-13 01:02:22.175871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-13 01:02:22.175878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:02:22.175891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:02:22.175897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:02:22.175903 | orchestrator | 2026-01-13 01:02:22.175912 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-01-13 01:02:22.175918 | orchestrator | Tuesday 13 January 2026 01:01:05 +0000 (0:00:01.865) 0:00:38.673 ******* 2026-01-13 01:02:22.175923 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:22.175928 | orchestrator | 2026-01-13 01:02:22.175934 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-01-13 01:02:22.175939 | orchestrator | Tuesday 13 January 2026 01:01:06 +0000 (0:00:00.125) 0:00:38.798 ******* 2026-01-13 01:02:22.176039 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:22.176049 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:22.176054 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:22.176079 | orchestrator | 2026-01-13 01:02:22.176086 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-01-13 01:02:22.176091 | orchestrator | Tuesday 13 January 2026 01:01:06 +0000 (0:00:00.507) 0:00:39.305 ******* 2026-01-13 01:02:22.176097 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-13 01:02:22.176102 | orchestrator | 2026-01-13 01:02:22.176107 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-01-13 01:02:22.176113 | orchestrator | Tuesday 13 January 2026 01:01:07 +0000 (0:00:00.884) 0:00:40.191 ******* 2026-01-13 01:02:22.176119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-13 01:02:22.176125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-13 01:02:22.176177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-13 01:02:22.176193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:02:22.176199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2026-01-13 01:02:22.176205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:02:22.176211 | orchestrator | 2026-01-13 01:02:22.176216 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-01-13 01:02:22.176222 | orchestrator | Tuesday 13 January 2026 01:01:09 +0000 (0:00:02.464) 0:00:42.655 ******* 2026-01-13 01:02:22.176228 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:02:22.176233 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:02:22.176239 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:02:22.176244 | orchestrator | 2026-01-13 01:02:22.176250 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-13 01:02:22.176256 | orchestrator | Tuesday 13 January 2026 01:01:10 +0000 (0:00:00.289) 0:00:42.945 ******* 2026-01-13 01:02:22.176262 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:02:22.176267 | orchestrator | 2026-01-13 01:02:22.176273 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-01-13 01:02:22.176279 | orchestrator | Tuesday 13 January 2026 01:01:11 +0000 (0:00:00.892) 0:00:43.837 ******* 
2026-01-13 01:02:22.176293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-13 01:02:22.176304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-13 01:02:22.176310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-13 01:02:22.176316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:02:22.176322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:02:22.176332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:02:22.176345 | orchestrator | 2026-01-13 01:02:22.176353 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-01-13 01:02:22.176359 | orchestrator | Tuesday 13 January 2026 01:01:13 +0000 (0:00:02.435) 0:00:46.273 ******* 2026-01-13 01:02:22.176365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-13 01:02:22.176371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-13 01:02:22.176377 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:22.176383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176394 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:02:22.176406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176421 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:02:22.176427 | orchestrator |
2026-01-13 01:02:22.176432 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-01-13 01:02:22.176438 | orchestrator | Tuesday 13 January 2026 01:01:14 +0000 (0:00:01.132) 0:00:47.406 *******
2026-01-13 01:02:22.176443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176455 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:02:22.176624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176674 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:02:22.176679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176690 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:02:22.176695 | orchestrator |
2026-01-13 01:02:22.176701 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-01-13 01:02:22.176706 | orchestrator | Tuesday 13 January 2026 01:01:16 +0000 (0:00:01.693) 0:00:49.099 *******
2026-01-13 01:02:22.176711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176757 | orchestrator |
2026-01-13 01:02:22.176763 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-01-13 01:02:22.176768 | orchestrator | Tuesday 13 January 2026 01:01:19 +0000 (0:00:02.940) 0:00:52.040 *******
2026-01-13 01:02:22.176779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176818 | orchestrator |
2026-01-13 01:02:22.176825 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2026-01-13 01:02:22.176831 | orchestrator | Tuesday 13 January 2026 01:01:28 +0000 (0:00:09.650) 0:01:01.691 *******
2026-01-13 01:02:22.176836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176862 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:02:22.176867 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:02:22.176878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176890 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:02:22.176896 | orchestrator |
2026-01-13 01:02:22.176902 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2026-01-13 01:02:22.176907 | orchestrator | Tuesday 13 January 2026 01:01:30 +0000 (0:00:01.406) 0:01:03.098 *******
2026-01-13 01:02:22.176913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-13 01:02:22.176966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes':
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:02:22.176988 | orchestrator |
2026-01-13 01:02:22.176994 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-13 01:02:22.176999 | orchestrator | Tuesday 13 January 2026 01:01:35 +0000 (0:00:04.684) 0:01:07.782 *******
2026-01-13 01:02:22.177004 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:02:22.177009 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:02:22.177015 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:02:22.177020 | orchestrator |
2026-01-13 01:02:22.177025 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-01-13 01:02:22.177030 | orchestrator | Tuesday 13 January 2026 01:01:36 +0000 (0:00:00.931) 0:01:08.714 *******
2026-01-13 01:02:22.177035 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:02:22.177041 | orchestrator |
2026-01-13 01:02:22.177046 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-01-13 01:02:22.177051 | orchestrator | Tuesday 13 January 2026 01:01:38 +0000 (0:00:02.274) 0:01:10.989 *******
2026-01-13 01:02:22.177056 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:02:22.177061 | orchestrator |
2026-01-13 01:02:22.177066 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-01-13 01:02:22.177071 | orchestrator | Tuesday 13 January 2026 01:01:40 +0000 (0:00:02.210) 0:01:13.199 *******
2026-01-13 01:02:22.177076 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:02:22.177081 | orchestrator |
2026-01-13 01:02:22.177086 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-01-13 01:02:22.177092 | orchestrator | Tuesday 13 January 2026 01:01:54 +0000 (0:00:13.954) 0:01:27.154 *******
2026-01-13 01:02:22.177097 | orchestrator |
2026-01-13 01:02:22.177102 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-01-13 01:02:22.177107 | orchestrator | Tuesday 13 January 2026 01:01:54 +0000 (0:00:00.078) 0:01:27.232 *******
2026-01-13 01:02:22.177112 | orchestrator |
2026-01-13 01:02:22.177117 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-01-13 01:02:22.177123 | orchestrator | Tuesday 13 January 2026 01:01:54 +0000 (0:00:00.069) 0:01:27.302 *******
2026-01-13 01:02:22.177128 | orchestrator |
2026-01-13 01:02:22.177133 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-01-13 01:02:22.177138 | orchestrator | Tuesday 13 January 2026 01:01:54 +0000 (0:00:00.072) 0:01:27.374 *******
2026-01-13 01:02:22.177143 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:02:22.177148 | orchestrator | changed: [testbed-node-2]
2026-01-13 01:02:22.177153 | orchestrator | changed: [testbed-node-1]
2026-01-13 01:02:22.177158 | orchestrator |
2026-01-13 01:02:22.177163 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-01-13 01:02:22.177171 | orchestrator | Tuesday 13 January 2026 01:02:09 +0000 (0:00:14.827) 0:01:42.201 *******
2026-01-13 01:02:22.177176 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:02:22.177182 | orchestrator | changed: [testbed-node-1]
2026-01-13 01:02:22.177186 | orchestrator | changed: [testbed-node-2]
2026-01-13 01:02:22.177191 | orchestrator |
2026-01-13 01:02:22.177196 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 01:02:22.177203 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-13 01:02:22.177209 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-13 01:02:22.177214 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-13 01:02:22.177219 | orchestrator |
2026-01-13 01:02:22.177225 | orchestrator |
2026-01-13 01:02:22.177228 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 01:02:22.177232 | orchestrator | Tuesday 13 January 2026 01:02:19 +0000 (0:00:09.627) 0:01:51.829 *******
2026-01-13 01:02:22.177238 | orchestrator | ===============================================================================
2026-01-13 01:02:22.177241 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.83s
2026-01-13 01:02:22.177244 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 13.95s
2026-01-13 01:02:22.177248 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 9.65s
2026-01-13 01:02:22.177252 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.63s
2026-01-13 01:02:22.177255 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.76s
2026-01-13 01:02:22.177259 | orchestrator | magnum : Check magnum containers ---------------------------------------- 4.68s
2026-01-13 01:02:22.177262 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.98s
2026-01-13 01:02:22.177266 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.96s
2026-01-13 01:02:22.177270 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.89s
2026-01-13 01:02:22.177273 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.62s
2026-01-13 01:02:22.177277 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.25s
2026-01-13 01:02:22.177281 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.23s
2026-01-13 01:02:22.177284 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.19s
2026-01-13 01:02:22.177288 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.07s
2026-01-13 01:02:22.177291 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.94s
2026-01-13 01:02:22.177296 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.46s
2026-01-13 01:02:22.177302 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.44s
2026-01-13 01:02:22.177307 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.27s
2026-01-13 01:02:22.177312 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.21s
2026-01-13 01:02:22.177318 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.87s
2026-01-13 01:02:22.177323 | orchestrator | 2026-01-13 01:02:22 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED
2026-01-13 01:02:22.177328 | orchestrator | 2026-01-13 01:02:22 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:02:25.208067 | orchestrator | 2026-01-13 01:02:25 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:02:25.208711 | orchestrator | 2026-01-13 01:02:25 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED
2026-01-13 01:02:25.209656 | orchestrator | 2026-01-13 01:02:25 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED
2026-01-13 01:02:25.210426 | orchestrator | 2026-01-13 01:02:25 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED
2026-01-13 01:02:25.210590 | orchestrator | 2026-01-13 01:02:25 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:02:28.251293 | orchestrator | 2026-01-13 01:02:28 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:02:28.255003 | orchestrator | 2026-01-13 01:02:28 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED
2026-01-13 01:02:28.257347 | orchestrator | 2026-01-13 01:02:28 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED
2026-01-13 01:02:28.259597 | orchestrator | 2026-01-13 01:02:28 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED
2026-01-13 01:02:28.260264 | orchestrator | 2026-01-13 01:02:28 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:02:31.292294 | orchestrator | 2026-01-13 01:02:31 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:02:31.292371 | orchestrator | 2026-01-13 01:02:31 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED
2026-01-13 01:02:31.292937 | orchestrator | 2026-01-13 01:02:31 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED
2026-01-13 01:02:31.297219 | orchestrator | 2026-01-13 01:02:31 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED
2026-01-13 01:02:31.297275 | orchestrator | 2026-01-13 01:02:31 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:02:34.340598 | orchestrator | 2026-01-13 01:02:34 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED
2026-01-13 01:02:34.342882 | orchestrator | 2026-01-13 01:02:34 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED
2026-01-13 01:02:34.345283 | orchestrator | 2026-01-13 01:02:34 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED
2026-01-13 01:02:34.346439 | orchestrator
| 2026-01-13 01:02:34 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 01:02:34.346479 | orchestrator | 2026-01-13 01:02:34 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:02:37.392807 | orchestrator | 2026-01-13 01:02:37 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:02:37.394143 | orchestrator | 2026-01-13 01:02:37 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED 2026-01-13 01:02:37.396259 | orchestrator | 2026-01-13 01:02:37 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED 2026-01-13 01:02:37.397879 | orchestrator | 2026-01-13 01:02:37 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 01:02:37.398064 | orchestrator | 2026-01-13 01:02:37 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:02:40.438274 | orchestrator | 2026-01-13 01:02:40 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:02:40.438994 | orchestrator | 2026-01-13 01:02:40 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED 2026-01-13 01:02:40.442761 | orchestrator | 2026-01-13 01:02:40 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED 2026-01-13 01:02:40.447163 | orchestrator | 2026-01-13 01:02:40 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 01:02:40.448108 | orchestrator | 2026-01-13 01:02:40 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:02:43.496421 | orchestrator | 2026-01-13 01:02:43 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state STARTED 2026-01-13 01:02:43.499664 | orchestrator | 2026-01-13 01:02:43 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED 2026-01-13 01:02:43.502490 | orchestrator | 2026-01-13 01:02:43 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED 2026-01-13 01:02:43.504354 | orchestrator | 2026-01-13 01:02:43 | INFO  | 
Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 01:02:43.504390 | orchestrator | 2026-01-13 01:02:43 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:02:46.609477 | orchestrator | 2026-01-13 01:02:46.609539 | orchestrator | 2026-01-13 01:02:46.609548 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 01:02:46.609555 | orchestrator | 2026-01-13 01:02:46.609561 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 01:02:46.609568 | orchestrator | Tuesday 13 January 2026 00:58:22 +0000 (0:00:00.277) 0:00:00.277 ******* 2026-01-13 01:02:46.609575 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:02:46.609583 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:02:46.609589 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:02:46.609611 | orchestrator | ok: [testbed-node-3] 2026-01-13 01:02:46.609617 | orchestrator | ok: [testbed-node-4] 2026-01-13 01:02:46.609624 | orchestrator | ok: [testbed-node-5] 2026-01-13 01:02:46.609630 | orchestrator | 2026-01-13 01:02:46.609636 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 01:02:46.609642 | orchestrator | Tuesday 13 January 2026 00:58:23 +0000 (0:00:01.002) 0:00:01.280 ******* 2026-01-13 01:02:46.609648 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-13 01:02:46.609655 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-13 01:02:46.609662 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-13 01:02:46.609669 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-13 01:02:46.609675 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-13 01:02:46.609681 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-13 01:02:46.609687 | orchestrator | 2026-01-13 01:02:46.609693 | 
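The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a client that polls the manager for task states until none is still running. A minimal sketch of such a polling loop; `get_state` is a hypothetical callable (task id → state string), not the actual OSISM client API:

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll each task's state until none is in a running state.

    get_state: caller-supplied callable mapping a task id to a state
    string (e.g. "PENDING", "STARTED", "SUCCESS"). Returns the number
    of poll rounds performed; raises TimeoutError on the deadline.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    rounds = 0
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        rounds += 1
        for tid in sorted(pending):
            state = get_state(tid)
            print(f"Task {tid} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                # Task reached a terminal state; stop polling it.
                pending.discard(tid)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return rounds
```

The fixed one-second interval matches what the log shows; a production client might instead use backoff or a server-side wait.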
orchestrator | PLAY [Apply role neutron] ******************************************************
2026-01-13 01:02:46.609704 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-13 01:02:46.609710 | orchestrator | Tuesday 13 January 2026 00:58:23 +0000 (0:00:00.777) 0:00:02.058 *******
2026-01-13 01:02:46.609716 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

2026-01-13 01:02:46.609728 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-01-13 01:02:46.609735 | orchestrator | Tuesday 13 January 2026 00:58:25 +0000 (0:00:01.060) 0:00:03.118 *******
2026-01-13 01:02:46.609741 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:02:46.609746 | orchestrator | ok: [testbed-node-1]
2026-01-13 01:02:46.609789 | orchestrator | ok: [testbed-node-2]
2026-01-13 01:02:46.609796 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:02:46.609802 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:02:46.609816 | orchestrator | ok: [testbed-node-5]

2026-01-13 01:02:46.609830 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-01-13 01:02:46.609837 | orchestrator | Tuesday 13 January 2026 00:58:26 +0000 (0:00:01.212) 0:00:04.331 *******
2026-01-13 01:02:46.609843 | orchestrator | ok: [testbed-node-1]
2026-01-13 01:02:46.609850 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:02:46.609856 | orchestrator | ok: [testbed-node-2]
2026-01-13 01:02:46.609863 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:02:46.609869 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:02:46.609875 | orchestrator | ok: [testbed-node-5]

2026-01-13 01:02:46.609888 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-01-13 01:02:46.609894 | orchestrator | Tuesday 13 January 2026 00:58:27 +0000 (0:00:01.132) 0:00:05.463 *******
2026-01-13 01:02:46.609919 | orchestrator | ok: [testbed-node-0] => {"changed": false, "msg": "All assertions passed"}
2026-01-13 01:02:46.609980 | orchestrator | ok: [testbed-node-1] => {"changed": false, "msg": "All assertions passed"}
2026-01-13 01:02:46.610067 | orchestrator | ok: [testbed-node-2] => {"changed": false, "msg": "All assertions passed"}
2026-01-13 01:02:46.610093 | orchestrator | ok: [testbed-node-3] => {"changed": false, "msg": "All assertions passed"}
2026-01-13 01:02:46.610120 | orchestrator | ok: [testbed-node-4] => {"changed": false, "msg": "All assertions passed"}
2026-01-13 01:02:46.610154 | orchestrator | ok: [testbed-node-5] => {"changed": false, "msg": "All assertions passed"}

2026-01-13 01:02:46.610188 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-01-13 01:02:46.610195 | orchestrator | Tuesday 13 January 2026 00:58:28 +0000 (0:00:00.807) 0:00:06.270 *******
2026-01-13 01:02:46.610202 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:02:46.610209 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:02:46.610215 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:02:46.610222 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:02:46.610228 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:02:46.610235 | orchestrator | skipping: [testbed-node-5]

2026-01-13 01:02:46.610249 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-01-13 01:02:46.610256 | orchestrator | Tuesday 13 January 2026 00:58:28 +0000 (0:00:00.554) 0:00:06.825 *******
2026-01-13 01:02:46.610262 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))

2026-01-13 01:02:46.610275 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-01-13 01:02:46.610281 | orchestrator | Tuesday 13 January 2026 00:58:32 +0000 (0:00:03.627) 0:00:10.453 *******
2026-01-13 01:02:46.610288 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-01-13 01:02:46.610295 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)

2026-01-13 01:02:46.610375 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-01-13 01:02:46.610384 | orchestrator | Tuesday 13 January 2026 00:58:39 +0000 (0:00:07.329) 0:00:17.782 *******
2026-01-13 01:02:46.610391 | orchestrator | ok: [testbed-node-0] => (item=service)

2026-01-13 01:02:46.610404 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-01-13 01:02:46.610411 | orchestrator | Tuesday 13 January 2026 00:58:43 +0000 (0:00:03.512) 0:00:21.295 *******
2026-01-13 01:02:46.610418 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-13 01:02:46.610424 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)

2026-01-13 01:02:46.610437 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-01-13 01:02:46.610443 | orchestrator | Tuesday 13 January 2026 00:58:47 +0000 (0:00:04.422) 0:00:25.717 *******
2026-01-13 01:02:46.610450 | orchestrator | ok: [testbed-node-0] => (item=admin)

2026-01-13 01:02:46.610463 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-01-13 01:02:46.610469 | orchestrator | Tuesday 13 January 2026 00:58:51 +0000 (0:00:03.684) 0:00:29.401 *******
2026-01-13 01:02:46.610476 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-01-13 01:02:46.610483 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)

2026-01-13 01:02:46.610497 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-13 01:02:46.610504 | orchestrator | Tuesday 13 January 2026 00:58:59 +0000 (0:00:07.706) 0:00:37.108 *******
2026-01-13 01:02:46.610511 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:02:46.610518 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:02:46.610524 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:02:46.610531 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:02:46.610537 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:02:46.610549 | orchestrator | skipping: [testbed-node-5]

2026-01-13 01:02:46.610564 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-01-13 01:02:46.610571 | orchestrator | Tuesday 13 January 2026 00:58:59 +0000 (0:00:00.821) 0:00:37.930 *******
2026-01-13 01:02:46.610578 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:02:46.610585 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:02:46.610592 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:02:46.610599 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:02:46.610605 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:02:46.610617 | orchestrator | skipping: [testbed-node-5]

2026-01-13 01:02:46.610631 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-01-13 01:02:46.610638 | orchestrator | Tuesday 13 January 2026 00:59:02 +0000 (0:00:02.417) 0:00:40.347 *******
2026-01-13 01:02:46.610645 | orchestrator | ok: [testbed-node-1]
2026-01-13 01:02:46.610652 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:02:46.610660 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:02:46.610667 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:02:46.610673 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:02:46.610680 | orchestrator | ok: [testbed-node-2]

2026-01-13 01:02:46.610694 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-13 01:02:46.610701 | orchestrator | Tuesday 13 January 2026 00:59:04 +0000 (0:00:02.052) 0:00:42.399 *******
2026-01-13 01:02:46.610708 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:02:46.610715 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:02:46.610722 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:02:46.610729 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:02:46.610736 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:02:46.610743 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:02:46.610750
2026-01-13 01:02:46.610757 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-01-13 01:02:46.610764 | orchestrator | Tuesday 13 January 2026 00:59:07 +0000 (0:00:03.364) 0:00:45.764 *******
2026-01-13 01:02:46.610773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-13 01:02:46.610790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-13 01:02:46.610802 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-13 01:02:46.610813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-13 01:02:46.610820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-13 01:02:46.610828 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})

2026-01-13 01:02:46.610841 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-01-13 01:02:46.610847 | orchestrator | Tuesday 13 January 2026 00:59:11 +0000 (0:00:03.692) 0:00:49.456 *******
2026-01-13 01:02:46.610853 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not a directory
2026-01-13 01:02:46.610887 | orchestrator | ok: [testbed-node-0 -> localhost]

2026-01-13 01:02:46.610901 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-13 01:02:46.610915 | orchestrator | Tuesday 13 January 2026 00:59:12 +0000 (0:00:00.753) 0:00:50.210 *******
2026-01-13 01:02:46.610923 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

2026-01-13 01:02:46.610937 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-01-13 01:02:46.610961 | orchestrator | Tuesday 13 January 2026 00:59:13 +0000 (0:00:00.893) 0:00:51.103 *******
2026-01-13 01:02:46.610968 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-13 01:02:46.610978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-13 01:02:46.610985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-13 01:02:46.610991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-13 01:02:46.611006 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-13 01:02:46.611013 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})

2026-01-13 01:02:46.611026 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-01-13 01:02:46.611032 | orchestrator | Tuesday 13 January 2026 00:59:15 +0000 (0:00:02.780) 0:00:53.884 *******
2026-01-13 01:02:46.611041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-13 01:02:46.611048 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:02:46.611054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-13 01:02:46.611060 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:02:46.611066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-13 01:02:46.611075 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:02:46.611085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-13 01:02:46.611091 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:02:46.611098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-13 01:02:46.611104 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:02:46.611114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-13 01:02:46.611121 | orchestrator | skipping: [testbed-node-5]

2026-01-13 01:02:46.611133 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-01-13 01:02:46.611140 | orchestrator | Tuesday 13 January 2026 00:59:18 +0000 (0:00:03.172) 0:00:57.056 *******
2026-01-13 01:02:46.611147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-13 01:02:46.611159 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:02:46.611172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-13 01:02:46.611179 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:02:46.611187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-13 01:02:46.611194 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:02:46.611207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.611215 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.611222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.611229 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.611236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.611248 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.611255 | orchestrator | 2026-01-13 01:02:46.611262 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 
2026-01-13 01:02:46.611269 | orchestrator | Tuesday 13 January 2026 00:59:21 +0000 (0:00:02.473) 0:00:59.529 ******* 2026-01-13 01:02:46.611275 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.611282 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.611289 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.611295 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.611302 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.611309 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.611316 | orchestrator | 2026-01-13 01:02:46.611323 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-01-13 01:02:46.611334 | orchestrator | Tuesday 13 January 2026 00:59:24 +0000 (0:00:02.654) 0:01:02.184 ******* 2026-01-13 01:02:46.611341 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.611348 | orchestrator | 2026-01-13 01:02:46.611355 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-01-13 01:02:46.611362 | orchestrator | Tuesday 13 January 2026 00:59:24 +0000 (0:00:00.181) 0:01:02.365 ******* 2026-01-13 01:02:46.611369 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.611376 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.611383 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.611390 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.611397 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.611404 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.611411 | orchestrator | 2026-01-13 01:02:46.611419 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-01-13 01:02:46.611425 | orchestrator | Tuesday 13 January 2026 00:59:25 +0000 (0:00:00.733) 0:01:03.099 ******* 2026-01-13 01:02:46.611432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 01:02:46.611439 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.611449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 01:02:46.611461 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.611468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 01:02:46.611475 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.611487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46 | INFO  | Task a1dbb496-82c1-46c3-a715-dffbee1169f4 is in state SUCCESS 2026-01-13 01:02:46.611502 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.611509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.611517 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.611526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.611537 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.611543 | orchestrator | 2026-01-13 01:02:46.611550 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-01-13 01:02:46.611556 | orchestrator | Tuesday 13 January 2026 00:59:28 +0000 (0:00:03.295) 0:01:06.395 ******* 2026-01-13 01:02:46.611564 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 01:02:46.611571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-13 01:02:46.611583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 01:02:46.611591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 01:02:46.611601 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-13 01:02:46.611612 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-13 01:02:46.611619 | orchestrator | 2026-01-13 01:02:46.611627 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-01-13 01:02:46.611633 | orchestrator | Tuesday 13 January 2026 00:59:32 +0000 (0:00:04.234) 0:01:10.629 ******* 2026-01-13 01:02:46.611641 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-13 01:02:46.611653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 01:02:46.611661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 
01:02:46.611675 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-13 01:02:46.611683 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-13 01:02:46.611691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 01:02:46.611697 | orchestrator | 2026-01-13 01:02:46.611704 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-01-13 01:02:46.611710 | orchestrator | Tuesday 13 January 2026 00:59:39 +0000 (0:00:06.942) 0:01:17.571 ******* 2026-01-13 01:02:46.611722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 01:02:46.611729 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.611737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 01:02:46.611748 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.611758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 01:02:46.611765 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.611773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.611780 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.611792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.611799 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.611806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.611818 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.611825 | orchestrator | 2026-01-13 01:02:46.611832 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-01-13 01:02:46.611839 | orchestrator | Tuesday 13 January 2026 00:59:42 +0000 (0:00:02.536) 0:01:20.108 ******* 2026-01-13 01:02:46.611846 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.611853 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.611860 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.611867 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:02:46.611874 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:02:46.611881 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:02:46.611888 | orchestrator | 2026-01-13 01:02:46.611895 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-01-13 01:02:46.611902 | orchestrator | Tuesday 13 January 2026 00:59:44 +0000 (0:00:02.896) 0:01:23.005 ******* 2026-01-13 01:02:46.611912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.611920 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.611927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.611935 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.611988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.611997 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.612010 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 01:02:46.612022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 01:02:46.612036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 01:02:46.612044 | orchestrator | 2026-01-13 01:02:46.612051 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-01-13 01:02:46.612058 | orchestrator | Tuesday 13 January 2026 00:59:49 +0000 (0:00:04.323) 0:01:27.328 ******* 2026-01-13 01:02:46.612065 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.612072 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.612079 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.612085 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.612092 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.612099 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.612106 | orchestrator | 2026-01-13 01:02:46.612113 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-01-13 01:02:46.612120 | orchestrator | Tuesday 13 January 2026 00:59:52 +0000 (0:00:02.794) 0:01:30.122 ******* 2026-01-13 01:02:46.612127 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.612134 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.612141 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.612148 | orchestrator | skipping: [testbed-node-4] 2026-01-13 
01:02:46.612155 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.612162 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.612169 | orchestrator | 2026-01-13 01:02:46.612176 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-01-13 01:02:46.612183 | orchestrator | Tuesday 13 January 2026 00:59:54 +0000 (0:00:02.490) 0:01:32.613 ******* 2026-01-13 01:02:46.612190 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.612197 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.612205 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.612211 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.612218 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.612225 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.612237 | orchestrator | 2026-01-13 01:02:46.612244 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-01-13 01:02:46.612251 | orchestrator | Tuesday 13 January 2026 00:59:57 +0000 (0:00:02.603) 0:01:35.216 ******* 2026-01-13 01:02:46.612258 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.612265 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.612272 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.612279 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.612286 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.612293 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.612300 | orchestrator | 2026-01-13 01:02:46.612307 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-01-13 01:02:46.612314 | orchestrator | Tuesday 13 January 2026 00:59:58 +0000 (0:00:01.776) 0:01:36.993 ******* 2026-01-13 01:02:46.612321 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.612332 | orchestrator | skipping: [testbed-node-0] 2026-01-13 
01:02:46.612340 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.612347 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.612354 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.612360 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.612367 | orchestrator | 2026-01-13 01:02:46.612374 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-01-13 01:02:46.612381 | orchestrator | Tuesday 13 January 2026 01:00:01 +0000 (0:00:02.683) 0:01:39.677 ******* 2026-01-13 01:02:46.612388 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.612395 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.612402 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.612409 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.612416 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.612423 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.612430 | orchestrator | 2026-01-13 01:02:46.612437 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-01-13 01:02:46.612444 | orchestrator | Tuesday 13 January 2026 01:00:03 +0000 (0:00:02.233) 0:01:41.910 ******* 2026-01-13 01:02:46.612451 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-13 01:02:46.612458 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.612465 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-13 01:02:46.612472 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.612479 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-13 01:02:46.612486 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.612493 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-13 01:02:46.612500 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.612507 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-13 01:02:46.612515 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.612522 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-13 01:02:46.612529 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.612535 | orchestrator | 2026-01-13 01:02:46.612541 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-01-13 01:02:46.612551 | orchestrator | Tuesday 13 January 2026 01:00:05 +0000 (0:00:02.159) 0:01:44.070 ******* 2026-01-13 01:02:46.612558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 01:02:46.612568 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.612574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 01:02:46.612580 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.612592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 01:02:46.612599 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.612607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.612614 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.612624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.612635 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.612642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.612649 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.612657 | orchestrator | 2026-01-13 01:02:46.612664 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-01-13 01:02:46.612671 | orchestrator | Tuesday 13 January 2026 01:00:08 +0000 (0:00:02.912) 0:01:46.983 ******* 2026-01-13 01:02:46.612678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 01:02:46.612685 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.612866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 2026-01-13 01:02:46 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED 2026-01-13 01:02:46.612878 | orchestrator | 2026-01-13 01:02:46 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED 2026-01-13 01:02:46.612884 | orchestrator | 2026-01-13 01:02:46 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED 2026-01-13 01:02:46.612891 | orchestrator | 2026-01-13 01:02:46 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:02:46.612896 | orchestrator | '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 01:02:46.612903 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.612912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2026-01-13 01:02:46.612923 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.612929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 01:02:46.612934 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.612970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.612978 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.612991 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.612997 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.613003 | orchestrator | 2026-01-13 01:02:46.613010 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-01-13 01:02:46.613017 | orchestrator | Tuesday 13 January 2026 01:00:11 +0000 (0:00:02.437) 0:01:49.421 ******* 2026-01-13 01:02:46.613023 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.613029 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.613036 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.613042 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.613049 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.613056 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.613062 | orchestrator | 2026-01-13 01:02:46.613069 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-01-13 01:02:46.613076 | orchestrator | Tuesday 13 January 2026 01:00:13 +0000 (0:00:02.472) 0:01:51.893 ******* 2026-01-13 01:02:46.613083 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.613096 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.613103 | orchestrator | skipping: [testbed-node-0] 2026-01-13 
01:02:46.613110 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:02:46.613117 | orchestrator | changed: [testbed-node-4] 2026-01-13 01:02:46.613124 | orchestrator | changed: [testbed-node-5] 2026-01-13 01:02:46.613131 | orchestrator | 2026-01-13 01:02:46.613137 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-01-13 01:02:46.613144 | orchestrator | Tuesday 13 January 2026 01:00:17 +0000 (0:00:03.433) 0:01:55.326 ******* 2026-01-13 01:02:46.613151 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.613158 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.613164 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.613171 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.613179 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.613186 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.613192 | orchestrator | 2026-01-13 01:02:46.613200 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-01-13 01:02:46.613207 | orchestrator | Tuesday 13 January 2026 01:00:19 +0000 (0:00:01.857) 0:01:57.183 ******* 2026-01-13 01:02:46.613218 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.613225 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.613232 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.613239 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.613246 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.613253 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.613260 | orchestrator | 2026-01-13 01:02:46.613267 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-01-13 01:02:46.613274 | orchestrator | Tuesday 13 January 2026 01:00:21 +0000 (0:00:02.222) 0:01:59.406 ******* 2026-01-13 01:02:46.613280 | orchestrator | skipping: [testbed-node-0] 2026-01-13 
01:02:46.613287 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.613294 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.613301 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.613308 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.613315 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.613323 | orchestrator | 2026-01-13 01:02:46.613330 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-01-13 01:02:46.613337 | orchestrator | Tuesday 13 January 2026 01:00:24 +0000 (0:00:03.220) 0:02:02.626 ******* 2026-01-13 01:02:46.613344 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.613351 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.613358 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.613365 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.613372 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.613379 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.613386 | orchestrator | 2026-01-13 01:02:46.613393 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-01-13 01:02:46.613400 | orchestrator | Tuesday 13 January 2026 01:00:26 +0000 (0:00:02.128) 0:02:04.755 ******* 2026-01-13 01:02:46.613407 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.613414 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.613421 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.613428 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.613435 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.613442 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.613448 | orchestrator | 2026-01-13 01:02:46.613455 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-01-13 01:02:46.613462 | orchestrator | Tuesday 13 January 
2026 01:00:28 +0000 (0:00:02.031) 0:02:06.787 ******* 2026-01-13 01:02:46.613469 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.613476 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.613483 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.613494 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.613502 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.613509 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.613516 | orchestrator | 2026-01-13 01:02:46.613524 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-01-13 01:02:46.613531 | orchestrator | Tuesday 13 January 2026 01:00:30 +0000 (0:00:01.902) 0:02:08.689 ******* 2026-01-13 01:02:46.613538 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.613544 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.613550 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.613558 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.613565 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.613573 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.613580 | orchestrator | 2026-01-13 01:02:46.613588 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-01-13 01:02:46.613596 | orchestrator | Tuesday 13 January 2026 01:00:32 +0000 (0:00:01.826) 0:02:10.515 ******* 2026-01-13 01:02:46.613603 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-13 01:02:46.613611 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.613624 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-13 01:02:46.613632 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.613639 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-13 01:02:46.613647 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.613654 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-13 01:02:46.613662 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-13 01:02:46.613670 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.613677 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.613685 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-13 01:02:46.613693 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.613699 | orchestrator | 2026-01-13 01:02:46.613717 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-01-13 01:02:46.613732 | orchestrator | Tuesday 13 January 2026 01:00:35 +0000 (0:00:02.715) 0:02:13.231 ******* 2026-01-13 01:02:46.613744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 
01:02:46.613753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 01:02:46.613764 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.613772 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.613780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-13 
01:02:46.613788 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.613801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.613810 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.613818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.613829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-13 01:02:46.613837 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.613845 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.613852 | orchestrator | 2026-01-13 01:02:46.613860 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-01-13 01:02:46.613872 | orchestrator | Tuesday 13 January 2026 01:00:37 +0000 (0:00:02.798) 0:02:16.030 ******* 2026-01-13 01:02:46.613880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-13 01:02:46.613889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 01:02:46.613902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 01:02:46.613911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-13 01:02:46.613923 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-13 01:02:46.613935 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-13 01:02:46.613954 | orchestrator | 2026-01-13 01:02:46.613962 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-13 01:02:46.613968 | orchestrator | Tuesday 13 January 2026 01:00:40 +0000 (0:00:02.568) 0:02:18.598 ******* 2026-01-13 01:02:46.613975 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:02:46.613982 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:02:46.613989 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:02:46.613996 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:02:46.614002 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:02:46.614009 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:02:46.614043 | orchestrator | 2026-01-13 01:02:46.614051 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-01-13 01:02:46.614059 | orchestrator | Tuesday 13 January 2026 01:00:40 +0000 (0:00:00.458) 0:02:19.056 ******* 2026-01-13 01:02:46.614066 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:02:46.614074 | orchestrator | 2026-01-13 01:02:46.614081 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-01-13 01:02:46.614087 | orchestrator | Tuesday 13 January 2026 01:00:42 +0000 (0:00:01.974) 0:02:21.031 ******* 2026-01-13 01:02:46.614095 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:02:46.614102 | orchestrator | 2026-01-13 01:02:46.614109 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-01-13 01:02:46.614117 | orchestrator | Tuesday 13 January 2026 01:00:45 +0000 (0:00:02.304) 0:02:23.335 ******* 2026-01-13 01:02:46.614124 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:02:46.614131 | orchestrator | 2026-01-13 01:02:46.614138 | orchestrator | TASK [neutron : Flush Handlers] 
************************************************ 2026-01-13 01:02:46.614145 | orchestrator | Tuesday 13 January 2026 01:01:24 +0000 (0:00:39.430) 0:03:02.766 ******* 2026-01-13 01:02:46.614153 | orchestrator | 2026-01-13 01:02:46.614160 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-13 01:02:46.614172 | orchestrator | Tuesday 13 January 2026 01:01:24 +0000 (0:00:00.185) 0:03:02.952 ******* 2026-01-13 01:02:46.614180 | orchestrator | 2026-01-13 01:02:46.614187 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-13 01:02:46.614195 | orchestrator | Tuesday 13 January 2026 01:01:25 +0000 (0:00:00.237) 0:03:03.190 ******* 2026-01-13 01:02:46.614202 | orchestrator | 2026-01-13 01:02:46.614210 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-13 01:02:46.614217 | orchestrator | Tuesday 13 January 2026 01:01:25 +0000 (0:00:00.130) 0:03:03.321 ******* 2026-01-13 01:02:46.614224 | orchestrator | 2026-01-13 01:02:46.614231 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-13 01:02:46.614239 | orchestrator | Tuesday 13 January 2026 01:01:25 +0000 (0:00:00.055) 0:03:03.376 ******* 2026-01-13 01:02:46.614246 | orchestrator | 2026-01-13 01:02:46.614254 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-13 01:02:46.614266 | orchestrator | Tuesday 13 January 2026 01:01:25 +0000 (0:00:00.099) 0:03:03.476 ******* 2026-01-13 01:02:46.614275 | orchestrator | 2026-01-13 01:02:46.614282 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-01-13 01:02:46.614290 | orchestrator | Tuesday 13 January 2026 01:01:25 +0000 (0:00:00.140) 0:03:03.617 ******* 2026-01-13 01:02:46.614297 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:02:46.614305 | orchestrator | 
changed: [testbed-node-2] 2026-01-13 01:02:46.614312 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:02:46.614319 | orchestrator | 2026-01-13 01:02:46.614327 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-01-13 01:02:46.614334 | orchestrator | Tuesday 13 January 2026 01:01:53 +0000 (0:00:28.430) 0:03:32.047 ******* 2026-01-13 01:02:46.614342 | orchestrator | changed: [testbed-node-4] 2026-01-13 01:02:46.614349 | orchestrator | changed: [testbed-node-5] 2026-01-13 01:02:46.614356 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:02:46.614363 | orchestrator | 2026-01-13 01:02:46.614371 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:02:46.614379 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-13 01:02:46.614423 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-13 01:02:46.614432 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-13 01:02:46.614440 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-13 01:02:46.614448 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-13 01:02:46.614455 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-13 01:02:46.614463 | orchestrator | 2026-01-13 01:02:46.614470 | orchestrator | 2026-01-13 01:02:46.614478 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:02:46.614485 | orchestrator | Tuesday 13 January 2026 01:02:45 +0000 (0:00:51.904) 0:04:23.951 ******* 2026-01-13 01:02:46.614493 | orchestrator | 
=============================================================================== 2026-01-13 01:02:46.614501 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 51.90s 2026-01-13 01:02:46.614508 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.43s 2026-01-13 01:02:46.614516 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.43s 2026-01-13 01:02:46.614523 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.71s 2026-01-13 01:02:46.614531 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.33s 2026-01-13 01:02:46.614537 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.94s 2026-01-13 01:02:46.614544 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.42s 2026-01-13 01:02:46.614550 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.32s 2026-01-13 01:02:46.614557 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.24s 2026-01-13 01:02:46.614563 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.69s 2026-01-13 01:02:46.614568 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.68s 2026-01-13 01:02:46.614574 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.63s 2026-01-13 01:02:46.614579 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.51s 2026-01-13 01:02:46.614590 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.43s 2026-01-13 01:02:46.614597 | orchestrator | Setting sysctl values --------------------------------------------------- 3.36s 2026-01-13 01:02:46.614604 | orchestrator | neutron : 
Copying over existing policy file ----------------------------- 3.30s
2026-01-13 01:02:46.614611 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.22s
2026-01-13 01:02:46.614618 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.17s
2026-01-13 01:02:46.614630 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 2.91s
2026-01-13 01:02:46.614637 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.90s
2026-01-13 01:02:49.630702 | orchestrator | 2026-01-13 01:02:49 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED
2026-01-13 01:02:49.634367 | orchestrator | 2026-01-13 01:02:49 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED
2026-01-13 01:02:49.634703 | orchestrator | 2026-01-13 01:02:49 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED
2026-01-13 01:02:49.635288 | orchestrator | 2026-01-13 01:02:49 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state STARTED
2026-01-13 01:02:49.635302 | orchestrator | 2026-01-13 01:02:49 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:04:08.840739 | orchestrator | 2026-01-13 01:04:08 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED
2026-01-13 01:04:08.841313 | orchestrator | 2026-01-13 01:04:08 | INFO  | Task
40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:04:08.842960 | orchestrator | 2026-01-13 01:04:08 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED 2026-01-13 01:04:08.843150 | orchestrator | 2026-01-13 01:04:08 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED 2026-01-13 01:04:08.847435 | orchestrator | 2026-01-13 01:04:08 | INFO  | Task 1463e705-f901-4fd7-827c-1c234e776e5a is in state SUCCESS 2026-01-13 01:04:08.848697 | orchestrator | 2026-01-13 01:04:08.848732 | orchestrator | 2026-01-13 01:04:08.848738 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 01:04:08.848743 | orchestrator | 2026-01-13 01:04:08.848748 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 01:04:08.848764 | orchestrator | Tuesday 13 January 2026 01:00:55 +0000 (0:00:00.209) 0:00:00.209 ******* 2026-01-13 01:04:08.848769 | orchestrator | ok: [testbed-manager] 2026-01-13 01:04:08.848774 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:04:08.848778 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:04:08.848783 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:04:08.848787 | orchestrator | ok: [testbed-node-3] 2026-01-13 01:04:08.848791 | orchestrator | ok: [testbed-node-4] 2026-01-13 01:04:08.848795 | orchestrator | ok: [testbed-node-5] 2026-01-13 01:04:08.848800 | orchestrator | 2026-01-13 01:04:08.848804 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 01:04:08.848809 | orchestrator | Tuesday 13 January 2026 01:00:56 +0000 (0:00:00.596) 0:00:00.806 ******* 2026-01-13 01:04:08.848813 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-13 01:04:08.848818 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-13 01:04:08.848822 | orchestrator | ok: [testbed-node-1] => 
(item=enable_prometheus_True)
2026-01-13 01:04:08.848826 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-01-13 01:04:08.848830 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-01-13 01:04:08.848835 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-01-13 01:04:08.848839 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-01-13 01:04:08.848843 | orchestrator |
2026-01-13 01:04:08.848872 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-01-13 01:04:08.848879 | orchestrator |
2026-01-13 01:04:08.848883 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-13 01:04:08.848887 | orchestrator | Tuesday 13 January 2026 01:00:56 +0000 (0:00:00.602) 0:00:01.408 *******
2026-01-13 01:04:08.848892 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 01:04:08.848939 | orchestrator |
2026-01-13 01:04:08.848946 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-01-13 01:04:08.848953 | orchestrator | Tuesday 13 January 2026 01:00:58 +0000 (0:00:01.257) 0:00:02.665 *******
2026-01-13 01:04:08.849012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-13 01:04:08.849049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-13 01:04:08.849057 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-13 01:04:08.849094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-13 01:04:08.849109 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-13 01:04:08.849114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-13 01:04:08.849119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849132 | orchestrator | changed:
[testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-13 01:04:08.849136 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-13 01:04:08.849141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849156 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849161 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849298 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849356 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849375 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-13 01:04:08.849387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849408 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849493 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849542 | orchestrator |
2026-01-13 01:04:08.849546 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-13 01:04:08.849551 | orchestrator | Tuesday 13 January 2026 01:01:00 +0000 (0:00:02.649) 0:00:05.315 *******
2026-01-13 01:04:08.849556 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 01:04:08.849561 | orchestrator |
2026-01-13 01:04:08.849565 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-01-13 01:04:08.849569 | orchestrator | Tuesday 13 January 2026 01:01:02 +0000 (0:00:01.435) 0:00:06.751 *******
2026-01-13 01:04:08.849574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-13 01:04:08.849582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-13 01:04:08.849590 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-13 01:04:08.849594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-13 01:04:08.849603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-13 01:04:08.849607 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-13 01:04:08.849612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849622 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'],
'dimensions': {}}})
2026-01-13 01:04:08.849630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849635 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-13 01:04:08.849640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849661 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.849678 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849683 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849709 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.849715 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-13 01:04:08.849720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.850214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.850267 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-13 01:04:08.850274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.850374 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.850399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.850409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-13 01:04:08.850493 | orchestrator |
2026-01-13 01:04:08.850502 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-01-13 01:04:08.850510 | orchestrator | Tuesday 13 January 2026 01:01:08 +0000 (0:00:06.126) 0:00:12.877 *******
2026-01-13 01:04:08.850518 | orchestrator
| skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.850545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.850572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850597 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-13 01:04:08.850606 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.850613 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.850627 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-13 01:04:08.850639 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.850653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.850680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.850693 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:08.850698 | orchestrator | skipping: [testbed-manager] 2026-01-13 01:04:08.850702 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:08.850707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.850723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850728 | orchestrator | 
skipping: [testbed-node-2] 2026-01-13 01:04:08.850743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.850748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.850755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.850760 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:04:08.850764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.850771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.850775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.850780 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:04:08.850784 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.850789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.850804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.850813 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:04:08.850817 | orchestrator | 2026-01-13 01:04:08.850822 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-13 01:04:08.850826 | orchestrator | Tuesday 13 January 2026 01:01:09 +0000 (0:00:01.526) 0:00:14.403 ******* 2026-01-13 01:04:08.850831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.850835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.850851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850855 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:08.850860 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-13 01:04:08.850874 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.850882 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.850887 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-13 01:04:08.850892 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.850897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.850901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.851042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.851055 | orchestrator | skipping: [testbed-manager] 2026-01-13 01:04:08.851073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.851078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.851101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.851106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.851112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.851117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.851122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-13 01:04:08.851126 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:08.851130 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:08.851148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.851157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.851161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.851166 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:04:08.851170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.851175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.851181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.851186 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:04:08.851191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-13 01:04:08.851195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.851215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-13 01:04:08.851221 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:04:08.851225 | orchestrator | 2026-01-13 01:04:08.851229 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-13 01:04:08.851234 | orchestrator | Tuesday 13 January 2026 01:01:12 +0000 (0:00:02.295) 0:00:16.699 ******* 2026-01-13 01:04:08.851239 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-13 01:04:08.851243 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.851248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.851254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.851259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.851266 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.851282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.851287 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.851292 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.851296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.851301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.851308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.851313 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.851320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.851336 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.851342 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.851347 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.851351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.851356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.851371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.851379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.851394 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}}}}) 2026-01-13 01:04:08.851400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.851405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.851409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.851414 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.851420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.851428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.851432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.851437 | orchestrator | 2026-01-13 01:04:08.851441 | orchestrator | 
TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-13 01:04:08.851446 | orchestrator | Tuesday 13 January 2026 01:01:18 +0000 (0:00:06.763) 0:00:23.463 ******* 2026-01-13 01:04:08.851450 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:04:08.851454 | orchestrator | 2026-01-13 01:04:08.851459 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-13 01:04:08.851475 | orchestrator | Tuesday 13 January 2026 01:01:20 +0000 (0:00:01.770) 0:00:25.234 ******* 2026-01-13 01:04:08.851480 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318086, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.213797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.851485 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318086, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.213797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.851490 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1318119, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2192214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.851497 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318086, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.213797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.851505 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318086, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.213797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.851509 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318086, 'dev': 
125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.213797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-13 01:04:08.851525 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318086, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.213797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.851536 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1318119, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2192214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.851543 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318086, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.213797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.851550 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1318119, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2192214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.851561 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1318075, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.21301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.851571 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1318119, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2192214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-01-13 01:04:08.851578 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1318119, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2192214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.851602 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1318075, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.21301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.851610 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1318119, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2192214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.851617 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1318075, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.21301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-13 01:04:08.851624 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2026-01-13 01:04:08.851638 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
2026-01-13 01:04:08.851645 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules)
2026-01-13 01:04:08.851651 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules)
2026-01-13 01:04:08.851674 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules)
2026-01-13 01:04:08.851682 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules)
2026-01-13 01:04:08.851689 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules)
2026-01-13 01:04:08.851695 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules)
2026-01-13 01:04:08.851709 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
2026-01-13 01:04:08.851716 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules)
2026-01-13 01:04:08.851723 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules)
2026-01-13 01:04:08.851730 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2026-01-13 01:04:08.851755 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2026-01-13 01:04:08.851763 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules)
2026-01-13 01:04:08.851771 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2026-01-13 01:04:08.851783 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2026-01-13 01:04:08.851794 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2026-01-13 01:04:08.851800 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2026-01-13 01:04:08.851806 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
2026-01-13 01:04:08.851832 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-01-13 01:04:08.851842 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-01-13 01:04:08.851849 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-01-13 01:04:08.851860 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2026-01-13 01:04:08.851870 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
2026-01-13 01:04:08.851877 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-01-13 01:04:08.851885 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-01-13 01:04:08.851911 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-01-13 01:04:08.851919 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2026-01-13 01:04:08.851942 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-01-13 01:04:08.851948 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-01-13 01:04:08.851955 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-01-13 01:04:08.851960 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-01-13 01:04:08.851964 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-01-13 01:04:08.851983 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-01-13 01:04:08.851988 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2026-01-13 01:04:08.851996 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-01-13 01:04:08.852001 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-01-13 01:04:08.852008 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-01-13 01:04:08.852013 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-01-13 01:04:08.852018 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-01-13 01:04:08.852035 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-01-13 01:04:08.852040 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-01-13 01:04:08.852047 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-01-13 01:04:08.852052 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-01-13 01:04:08.852060 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-13 01:04:08.852065 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-01-13 01:04:08.852070 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-01-13 01:04:08.852087 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-13 01:04:08.852095 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-13 01:04:08.852099 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-01-13 01:04:08.852103 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-01-13 01:04:08.852109 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-01-13 01:04:08.852113 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-01-13 01:04:08.852117 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-01-13 01:04:08.852133 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-01-13 01:04:08.852140 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-13 01:04:08.852145 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-13 01:04:08.852149 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-13 01:04:08.852155 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-13 01:04:08.852160 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-13 01:04:08.852164 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2026-01-13 01:04:08.852170 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-13 01:04:08.852177 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-13 01:04:08.852181 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-13 01:04:08.852185 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk':
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1317921, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2124681, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852191 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1318133, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2209988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852195 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1318133, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2209988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852199 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1318133, 'dev': 125, 'nlink': 1, 'atime': 
1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2209988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852207 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1317913, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.192258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852215 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1318112, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2185857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852219 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1317913, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.192258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852223 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1317913, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.192258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852229 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1318112, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2185857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852235 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1317921, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2124681, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852242 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1318099, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2165592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852257 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1318099, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2165592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852264 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1318102, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2172074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-13 01:04:08.852271 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1318096, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2159517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852277 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1318099, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2165592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852308 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1318112, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2185857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852317 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1317921, 'dev': 125, 'nlink': 1, 'atime': 
1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2124681, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852324 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1318096, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2159517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852340 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1318131, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2204368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852347 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:08.852354 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1317913, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.192258, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852361 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1317913, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.192258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852366 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1318096, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2159517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852375 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1317921, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2124681, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-01-13 01:04:08.852382 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1318131, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2204368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852389 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:04:08.852401 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1318099, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2165592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852411 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1318099, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2165592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852419 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1318131, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2204368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852423 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:04:08.852427 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1317913, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.192258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852431 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1318096, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2159517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852438 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1318096, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2159517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852442 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1318092, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2158182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-13 01:04:08.852450 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1318099, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2165592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852457 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 
1318131, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2204368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852461 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:04:08.852465 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1318081, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2137249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-13 01:04:08.852469 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1318131, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2204368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852473 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:08.852477 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1318096, 'dev': 
125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2159517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852483 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1318131, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2204368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-13 01:04:08.852487 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:08.852494 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1318115, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2188346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-13 01:04:08.852498 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1317912, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 
1768263336.192061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-13 01:04:08.852505 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1318133, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2209988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-13 01:04:08.852510 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1318112, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2185857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-13 01:04:08.852514 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1317921, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2124681, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-13 01:04:08.852518 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1317913, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.192258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-13 01:04:08.852524 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1318099, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2165592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-13 01:04:08.852533 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1318096, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2159517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-13 01:04:08.852537 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1318131, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.2204368, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-13 01:04:08.852541 | orchestrator | 2026-01-13 01:04:08.852546 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-01-13 01:04:08.852550 | orchestrator | Tuesday 13 January 2026 01:01:49 +0000 (0:00:28.699) 0:00:53.933 ******* 2026-01-13 01:04:08.852554 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:04:08.852558 | orchestrator | 2026-01-13 01:04:08.852564 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-01-13 01:04:08.852568 | orchestrator | Tuesday 13 January 2026 01:01:50 +0000 (0:00:00.720) 0:00:54.654 ******* 2026-01-13 01:04:08.852572 | orchestrator | [WARNING]: Skipped 2026-01-13 01:04:08.852576 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852580 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-01-13 01:04:08.852584 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852588 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-01-13 01:04:08.852592 | orchestrator | [WARNING]: Skipped 2026-01-13 01:04:08.852596 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852600 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-01-13 01:04:08.852604 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852608 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-01-13 01:04:08.852612 | orchestrator | [WARNING]: Skipped 2026-01-13 01:04:08.852616 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852620 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-01-13 01:04:08.852624 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852628 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-01-13 01:04:08.852632 | orchestrator | [WARNING]: Skipped 2026-01-13 01:04:08.852636 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852640 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-01-13 01:04:08.852644 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852648 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-01-13 01:04:08.852652 | orchestrator | [WARNING]: Skipped 2026-01-13 01:04:08.852656 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852660 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-01-13 01:04:08.852666 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852670 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-01-13 01:04:08.852674 | orchestrator | [WARNING]: Skipped 2026-01-13 01:04:08.852678 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852682 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-01-13 01:04:08.852686 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852690 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-01-13 01:04:08.852694 | orchestrator | [WARNING]: Skipped 2026-01-13 01:04:08.852698 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852702 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-01-13 01:04:08.852708 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-13 01:04:08.852712 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-01-13 01:04:08.852716 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:04:08.852720 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-13 01:04:08.852724 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-13 01:04:08.852728 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-13 01:04:08.852731 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-13 01:04:08.852735 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-13 01:04:08.852739 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-13 01:04:08.852743 | orchestrator | 2026-01-13 01:04:08.852748 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-01-13 01:04:08.852755 | orchestrator | Tuesday 13 January 2026 01:01:52 +0000 (0:00:02.029) 0:00:56.684 ******* 2026-01-13 01:04:08.852765 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-13 01:04:08.852772 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:08.852779 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-13 01:04:08.852785 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:08.852792 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-13 01:04:08.852797 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:08.852804 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-13 01:04:08.852809 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:04:08.852815 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-13 01:04:08.852822 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:04:08.852828 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-13 01:04:08.852835 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:04:08.852841 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-01-13 01:04:08.852848 | orchestrator | 2026-01-13 01:04:08.852855 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-01-13 01:04:08.852862 | orchestrator | Tuesday 13 January 2026 01:02:11 +0000 (0:00:19.058) 0:01:15.742 ******* 2026-01-13 01:04:08.852869 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-13 01:04:08.852880 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:08.852887 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-13 01:04:08.852894 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:08.852900 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-13 01:04:08.852906 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:08.852913 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-13 01:04:08.852939 | orchestrator | 
skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-13 01:04:08.852947 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:04:08.852954 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:04:08.852961 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-13 01:04:08.852965 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:04:08.852969 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-13 01:04:08.852973 | orchestrator | 2026-01-13 01:04:08.852977 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-13 01:04:08.852981 | orchestrator | Tuesday 13 January 2026 01:02:14 +0000 (0:00:03.602) 0:01:19.345 ******* 2026-01-13 01:04:08.852985 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-13 01:04:08.852989 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:08.852993 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-13 01:04:08.852997 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:08.853001 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-13 01:04:08.853005 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:08.853009 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-13 01:04:08.853013 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:04:08.853017 | orchestrator | skipping: [testbed-node-4] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-13 01:04:08.853021 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:04:08.853025 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-13 01:04:08.853029 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:04:08.853033 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-13 01:04:08.853037 | orchestrator | 2026-01-13 01:04:08.853044 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-13 01:04:08.853048 | orchestrator | Tuesday 13 January 2026 01:02:16 +0000 (0:00:01.424) 0:01:20.769 ******* 2026-01-13 01:04:08.853052 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:04:08.853056 | orchestrator | 2026-01-13 01:04:08.853060 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-13 01:04:08.853064 | orchestrator | Tuesday 13 January 2026 01:02:16 +0000 (0:00:00.620) 0:01:21.390 ******* 2026-01-13 01:04:08.853068 | orchestrator | skipping: [testbed-manager] 2026-01-13 01:04:08.853072 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:08.853076 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:08.853080 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:08.853084 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:04:08.853088 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:04:08.853091 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:04:08.853095 | orchestrator | 2026-01-13 01:04:08.853099 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-01-13 01:04:08.853103 | orchestrator | Tuesday 13 January 2026 01:02:17 +0000 
(0:00:00.580) 0:01:21.970 ******* 2026-01-13 01:04:08.853107 | orchestrator | skipping: [testbed-manager] 2026-01-13 01:04:08.853111 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:04:08.853115 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:04:08.853119 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:04:08.853126 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:04:08.853130 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:04:08.853134 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:04:08.853137 | orchestrator | 2026-01-13 01:04:08.853141 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-13 01:04:08.853145 | orchestrator | Tuesday 13 January 2026 01:02:19 +0000 (0:00:01.764) 0:01:23.734 ******* 2026-01-13 01:04:08.853149 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-13 01:04:08.853153 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-13 01:04:08.853157 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:08.853161 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-13 01:04:08.853166 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:08.853170 | orchestrator | skipping: [testbed-manager] 2026-01-13 01:04:08.853173 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-13 01:04:08.853177 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:08.853184 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-13 01:04:08.853188 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:04:08.853192 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-13 01:04:08.853196 | 
orchestrator | skipping: [testbed-node-4] 2026-01-13 01:04:08.853200 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-13 01:04:08.853204 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:04:08.853208 | orchestrator | 2026-01-13 01:04:08.853212 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-13 01:04:08.853216 | orchestrator | Tuesday 13 January 2026 01:02:21 +0000 (0:00:01.857) 0:01:25.592 ******* 2026-01-13 01:04:08.853220 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-13 01:04:08.853224 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:08.853228 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-13 01:04:08.853232 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-13 01:04:08.853236 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:08.853240 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:08.853244 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-13 01:04:08.853248 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-13 01:04:08.853252 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:04:08.853256 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-13 01:04:08.853260 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:04:08.853264 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-13 01:04:08.853268 | 
orchestrator | skipping: [testbed-node-5] 2026-01-13 01:04:08.853272 | orchestrator | 2026-01-13 01:04:08.853277 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-13 01:04:08.853281 | orchestrator | Tuesday 13 January 2026 01:02:22 +0000 (0:00:01.575) 0:01:27.168 ******* 2026-01-13 01:04:08.853285 | orchestrator | [WARNING]: Skipped 2026-01-13 01:04:08.853289 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-13 01:04:08.853293 | orchestrator | due to this access issue: 2026-01-13 01:04:08.853297 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-13 01:04:08.853304 | orchestrator | not a directory 2026-01-13 01:04:08.853308 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:04:08.853312 | orchestrator | 2026-01-13 01:04:08.853316 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-01-13 01:04:08.853320 | orchestrator | Tuesday 13 January 2026 01:02:23 +0000 (0:00:01.011) 0:01:28.179 ******* 2026-01-13 01:04:08.853324 | orchestrator | skipping: [testbed-manager] 2026-01-13 01:04:08.853328 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:08.853334 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:08.853338 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:08.853342 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:04:08.853346 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:04:08.853350 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:04:08.853354 | orchestrator | 2026-01-13 01:04:08.853358 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-13 01:04:08.853362 | orchestrator | Tuesday 13 January 2026 01:02:24 +0000 (0:00:00.705) 0:01:28.885 ******* 2026-01-13 01:04:08.853366 | orchestrator | skipping: [testbed-manager] 2026-01-13 
01:04:08.853370 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:08.853374 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:08.853378 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:08.853382 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:04:08.853386 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:04:08.853390 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:04:08.853394 | orchestrator | 2026-01-13 01:04:08.853398 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-01-13 01:04:08.853402 | orchestrator | Tuesday 13 January 2026 01:02:25 +0000 (0:00:00.672) 0:01:29.558 ******* 2026-01-13 01:04:08.853407 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-13 01:04:08.853415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.853419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.853424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.853437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.853447 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.853457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.853464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-13 01:04:08.853471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.853481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.853488 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.853495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.853512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.853526 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.853534 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.853542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.853557 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-13 01:04:08.853567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.853579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.853586 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.853594 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.853605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.853613 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 
01:04:08.853620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.853633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.853639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-13 01:04:08.853646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.853650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.853657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-13 01:04:08.853661 | orchestrator | 2026-01-13 01:04:08.853665 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-01-13 01:04:08.853669 | orchestrator | Tuesday 13 January 2026 01:02:28 +0000 (0:00:03.618) 0:01:33.177 ******* 2026-01-13 01:04:08.853673 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-13 01:04:08.853677 | orchestrator | skipping: [testbed-manager] 2026-01-13 01:04:08.853681 | orchestrator | 2026-01-13 01:04:08.853685 | orchestrator | TASK [prometheus 
: Flush handlers] ********************************************* 2026-01-13 01:04:08.853689 | orchestrator | Tuesday 13 January 2026 01:02:29 +0000 (0:00:01.319) 0:01:34.497 ******* 2026-01-13 01:04:08.853693 | orchestrator | 2026-01-13 01:04:08.853696 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-13 01:04:08.853700 | orchestrator | Tuesday 13 January 2026 01:02:30 +0000 (0:00:00.081) 0:01:34.578 ******* 2026-01-13 01:04:08.853704 | orchestrator | 2026-01-13 01:04:08.853708 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-13 01:04:08.853712 | orchestrator | Tuesday 13 January 2026 01:02:30 +0000 (0:00:00.064) 0:01:34.643 ******* 2026-01-13 01:04:08.853716 | orchestrator | 2026-01-13 01:04:08.853720 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-13 01:04:08.853724 | orchestrator | Tuesday 13 January 2026 01:02:30 +0000 (0:00:00.059) 0:01:34.702 ******* 2026-01-13 01:04:08.853728 | orchestrator | 2026-01-13 01:04:08.853732 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-13 01:04:08.853736 | orchestrator | Tuesday 13 January 2026 01:02:30 +0000 (0:00:00.225) 0:01:34.928 ******* 2026-01-13 01:04:08.853740 | orchestrator | 2026-01-13 01:04:08.853743 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-13 01:04:08.853747 | orchestrator | Tuesday 13 January 2026 01:02:30 +0000 (0:00:00.063) 0:01:34.991 ******* 2026-01-13 01:04:08.853751 | orchestrator | 2026-01-13 01:04:08.853755 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-13 01:04:08.853759 | orchestrator | Tuesday 13 January 2026 01:02:30 +0000 (0:00:00.063) 0:01:35.055 ******* 2026-01-13 01:04:08.853765 | orchestrator | 2026-01-13 01:04:08.853769 | orchestrator | RUNNING 
HANDLER [prometheus : Restart prometheus-server container] ************* 2026-01-13 01:04:08.853773 | orchestrator | Tuesday 13 January 2026 01:02:30 +0000 (0:00:00.085) 0:01:35.141 ******* 2026-01-13 01:04:08.853777 | orchestrator | changed: [testbed-manager] 2026-01-13 01:04:08.853781 | orchestrator | 2026-01-13 01:04:08.853785 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-01-13 01:04:08.853791 | orchestrator | Tuesday 13 January 2026 01:02:47 +0000 (0:00:16.704) 0:01:51.846 ******* 2026-01-13 01:04:08.853795 | orchestrator | changed: [testbed-node-4] 2026-01-13 01:04:08.853799 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:04:08.853803 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:04:08.853807 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:04:08.853811 | orchestrator | changed: [testbed-manager] 2026-01-13 01:04:08.853815 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:04:08.853819 | orchestrator | changed: [testbed-node-5] 2026-01-13 01:04:08.853822 | orchestrator | 2026-01-13 01:04:08.853826 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-01-13 01:04:08.853830 | orchestrator | Tuesday 13 January 2026 01:03:01 +0000 (0:00:14.270) 0:02:06.116 ******* 2026-01-13 01:04:08.853834 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:04:08.853838 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:04:08.853842 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:04:08.853846 | orchestrator | 2026-01-13 01:04:08.853850 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-01-13 01:04:08.853854 | orchestrator | Tuesday 13 January 2026 01:03:11 +0000 (0:00:10.338) 0:02:16.455 ******* 2026-01-13 01:04:08.853858 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:04:08.853862 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:04:08.853866 | 
orchestrator | changed: [testbed-node-1] 2026-01-13 01:04:08.853869 | orchestrator | 2026-01-13 01:04:08.853873 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-01-13 01:04:08.853877 | orchestrator | Tuesday 13 January 2026 01:03:24 +0000 (0:00:12.314) 0:02:28.769 ******* 2026-01-13 01:04:08.853881 | orchestrator | changed: [testbed-manager] 2026-01-13 01:04:08.853885 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:04:08.853889 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:04:08.853893 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:04:08.853897 | orchestrator | changed: [testbed-node-4] 2026-01-13 01:04:08.853901 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:04:08.853904 | orchestrator | changed: [testbed-node-5] 2026-01-13 01:04:08.853908 | orchestrator | 2026-01-13 01:04:08.853912 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-01-13 01:04:08.853916 | orchestrator | Tuesday 13 January 2026 01:03:38 +0000 (0:00:14.651) 0:02:43.420 ******* 2026-01-13 01:04:08.853920 | orchestrator | changed: [testbed-manager] 2026-01-13 01:04:08.853953 | orchestrator | 2026-01-13 01:04:08.853957 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-01-13 01:04:08.853961 | orchestrator | Tuesday 13 January 2026 01:03:47 +0000 (0:00:08.192) 0:02:51.613 ******* 2026-01-13 01:04:08.853965 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:04:08.853969 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:04:08.853973 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:04:08.853977 | orchestrator | 2026-01-13 01:04:08.853980 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-01-13 01:04:08.853984 | orchestrator | Tuesday 13 January 2026 01:03:51 +0000 (0:00:04.742) 0:02:56.356 ******* 2026-01-13 01:04:08.853988 | 
orchestrator | changed: [testbed-manager] 2026-01-13 01:04:08.853992 | orchestrator | 2026-01-13 01:04:08.853996 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-01-13 01:04:08.854000 | orchestrator | Tuesday 13 January 2026 01:03:56 +0000 (0:00:04.951) 0:03:01.308 ******* 2026-01-13 01:04:08.854004 | orchestrator | changed: [testbed-node-5] 2026-01-13 01:04:08.854011 | orchestrator | changed: [testbed-node-4] 2026-01-13 01:04:08.854046 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:04:08.854050 | orchestrator | 2026-01-13 01:04:08.854054 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:04:08.854060 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-13 01:04:08.854065 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-13 01:04:08.854069 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-13 01:04:08.854073 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-13 01:04:08.854077 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-13 01:04:08.854081 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-13 01:04:08.854085 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-13 01:04:08.854088 | orchestrator | 2026-01-13 01:04:08.854092 | orchestrator | 2026-01-13 01:04:08.854096 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:04:08.854100 | orchestrator | Tuesday 13 January 2026 01:04:06 +0000 (0:00:09.826) 0:03:11.134 ******* 2026-01-13 
01:04:08.854104 | orchestrator | =============================================================================== 2026-01-13 01:04:08.854108 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.70s 2026-01-13 01:04:08.854112 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 19.06s 2026-01-13 01:04:08.854116 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.70s 2026-01-13 01:04:08.854120 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.65s 2026-01-13 01:04:08.854124 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.27s 2026-01-13 01:04:08.854131 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.31s 2026-01-13 01:04:08.854136 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.34s 2026-01-13 01:04:08.854139 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.83s 2026-01-13 01:04:08.854143 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.19s 2026-01-13 01:04:08.854147 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.76s 2026-01-13 01:04:08.854151 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.13s 2026-01-13 01:04:08.854155 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.95s 2026-01-13 01:04:08.854159 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 4.74s 2026-01-13 01:04:08.854163 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.62s 2026-01-13 01:04:08.854167 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.60s 2026-01-13 01:04:08.854171 
| orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.65s 2026-01-13 01:04:08.854175 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.30s 2026-01-13 01:04:08.854179 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.03s 2026-01-13 01:04:08.854183 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.86s 2026-01-13 01:04:08.854190 | orchestrator | prometheus : Find custom prometheus alert rules files ------------------- 1.77s 2026-01-13 01:04:08.854194 | orchestrator | 2026-01-13 01:04:08 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:04:11.891156 | orchestrator | 2026-01-13 01:04:11 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED 2026-01-13 01:04:11.892507 | orchestrator | 2026-01-13 01:04:11 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:04:11.893268 | orchestrator | 2026-01-13 01:04:11 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED 2026-01-13 01:04:11.894174 | orchestrator | 2026-01-13 01:04:11 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED 2026-01-13 01:04:11.894209 | orchestrator | 2026-01-13 01:04:11 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:04:14.937692 | orchestrator | 2026-01-13 01:04:14 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED 2026-01-13 01:04:14.939172 | orchestrator | 2026-01-13 01:04:14 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:04:14.941259 | orchestrator | 2026-01-13 01:04:14 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED 2026-01-13 01:04:14.943454 | orchestrator | 2026-01-13 01:04:14 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED 2026-01-13 01:04:14.943506 | orchestrator | 2026-01-13 01:04:14 | INFO  | Wait 1 second(s) 
until the next check 2026-01-13 01:04:45.413048 | orchestrator | 2026-01-13 01:04:45 | INFO  |
Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED 2026-01-13 01:04:45.413607 | orchestrator | 2026-01-13 01:04:45 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:04:45.414380 | orchestrator | 2026-01-13 01:04:45 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state STARTED 2026-01-13 01:04:45.416412 | orchestrator | 2026-01-13 01:04:45 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED 2026-01-13 01:04:45.416438 | orchestrator | 2026-01-13 01:04:45 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:04:48.467853 | orchestrator | 2026-01-13 01:04:48 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state STARTED 2026-01-13 01:04:48.467947 | orchestrator | 2026-01-13 01:04:48 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED 2026-01-13 01:04:48.468668 | orchestrator | 2026-01-13 01:04:48 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:04:48.469885 | orchestrator | 2026-01-13 01:04:48 | INFO  | Task 3416391c-81e1-4e37-b2f3-229cd199ed0b is in state SUCCESS 2026-01-13 01:04:48.471997 | orchestrator | 2026-01-13 01:04:48.472028 | orchestrator | 2026-01-13 01:04:48.472037 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 01:04:48.472045 | orchestrator | 2026-01-13 01:04:48.472052 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 01:04:48.472059 | orchestrator | Tuesday 13 January 2026 01:02:02 +0000 (0:00:00.245) 0:00:00.245 ******* 2026-01-13 01:04:48.472066 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:04:48.472073 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:04:48.472080 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:04:48.472086 | orchestrator | 2026-01-13 01:04:48.472093 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 
01:04:48.472100 | orchestrator | Tuesday 13 January 2026 01:02:03 +0000 (0:00:00.251) 0:00:00.496 ******* 2026-01-13 01:04:48.472106 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-01-13 01:04:48.472113 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-01-13 01:04:48.472120 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-01-13 01:04:48.472127 | orchestrator | 2026-01-13 01:04:48.472134 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-01-13 01:04:48.472140 | orchestrator | 2026-01-13 01:04:48.472147 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-13 01:04:48.472154 | orchestrator | Tuesday 13 January 2026 01:02:03 +0000 (0:00:00.343) 0:00:00.840 ******* 2026-01-13 01:04:48.472161 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:04:48.472168 | orchestrator | 2026-01-13 01:04:48.472174 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-01-13 01:04:48.472180 | orchestrator | Tuesday 13 January 2026 01:02:03 +0000 (0:00:00.446) 0:00:01.286 ******* 2026-01-13 01:04:48.472186 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-01-13 01:04:48.472193 | orchestrator | 2026-01-13 01:04:48.472199 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-01-13 01:04:48.472206 | orchestrator | Tuesday 13 January 2026 01:02:08 +0000 (0:00:04.079) 0:00:05.365 ******* 2026-01-13 01:04:48.472212 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-01-13 01:04:48.472219 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-01-13 01:04:48.472226 | orchestrator | 2026-01-13 
01:04:48.472232 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-01-13 01:04:48.472238 | orchestrator | Tuesday 13 January 2026 01:02:14 +0000 (0:00:06.159) 0:00:11.525 ******* 2026-01-13 01:04:48.472300 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-13 01:04:48.472322 | orchestrator | 2026-01-13 01:04:48.472329 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-01-13 01:04:48.472335 | orchestrator | Tuesday 13 January 2026 01:02:17 +0000 (0:00:02.823) 0:00:14.348 ******* 2026-01-13 01:04:48.472341 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-13 01:04:48.472347 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-01-13 01:04:48.472354 | orchestrator | 2026-01-13 01:04:48.472360 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-01-13 01:04:48.472367 | orchestrator | Tuesday 13 January 2026 01:02:20 +0000 (0:00:03.436) 0:00:17.784 ******* 2026-01-13 01:04:48.472373 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-13 01:04:48.472380 | orchestrator | 2026-01-13 01:04:48.472386 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-01-13 01:04:48.472393 | orchestrator | Tuesday 13 January 2026 01:02:23 +0000 (0:00:03.059) 0:00:20.844 ******* 2026-01-13 01:04:48.472399 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-01-13 01:04:48.472405 | orchestrator | 2026-01-13 01:04:48.472412 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-01-13 01:04:48.472418 | orchestrator | Tuesday 13 January 2026 01:02:26 +0000 (0:00:03.334) 0:00:24.178 ******* 2026-01-13 01:04:48.472531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-13 01:04:48.472548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-13 01:04:48.472562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-13 01:04:48.472570 | orchestrator | 2026-01-13 01:04:48.472577 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-13 01:04:48.472584 | orchestrator | Tuesday 13 January 2026 01:02:30 +0000 (0:00:03.420) 0:00:27.599 ******* 2026-01-13 01:04:48.472591 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:04:48.472598 | orchestrator | 2026-01-13 01:04:48.472605 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-01-13 01:04:48.472616 | orchestrator | Tuesday 13 January 2026 01:02:30 +0000 (0:00:00.663) 0:00:28.262 ******* 2026-01-13 01:04:48.472623 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:04:48.472630 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:04:48.472637 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:04:48.472643 | 
orchestrator | 2026-01-13 01:04:48.472650 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-01-13 01:04:48.472657 | orchestrator | Tuesday 13 January 2026 01:02:34 +0000 (0:00:03.638) 0:00:31.901 ******* 2026-01-13 01:04:48.472663 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-13 01:04:48.472671 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-13 01:04:48.472677 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-13 01:04:48.472683 | orchestrator | 2026-01-13 01:04:48.472690 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-01-13 01:04:48.472701 | orchestrator | Tuesday 13 January 2026 01:02:36 +0000 (0:00:01.656) 0:00:33.557 ******* 2026-01-13 01:04:48.472707 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-13 01:04:48.472714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-13 01:04:48.472720 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-13 01:04:48.472727 | orchestrator | 2026-01-13 01:04:48.472733 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-01-13 01:04:48.472740 | orchestrator | Tuesday 13 January 2026 01:02:37 +0000 (0:00:01.292) 0:00:34.850 ******* 2026-01-13 01:04:48.472746 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:04:48.472753 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:04:48.472759 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:04:48.472766 | orchestrator | 2026-01-13 01:04:48.472773 | orchestrator | TASK 
[glance : Check if policies shall be overwritten] ************************* 2026-01-13 01:04:48.472779 | orchestrator | Tuesday 13 January 2026 01:02:38 +0000 (0:00:00.648) 0:00:35.499 ******* 2026-01-13 01:04:48.472786 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:48.472792 | orchestrator | 2026-01-13 01:04:48.472799 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-01-13 01:04:48.472805 | orchestrator | Tuesday 13 January 2026 01:02:38 +0000 (0:00:00.375) 0:00:35.875 ******* 2026-01-13 01:04:48.472815 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:48.472822 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:48.472828 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:48.472835 | orchestrator | 2026-01-13 01:04:48.472842 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-13 01:04:48.472848 | orchestrator | Tuesday 13 January 2026 01:02:38 +0000 (0:00:00.291) 0:00:36.166 ******* 2026-01-13 01:04:48.472855 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:04:48.472862 | orchestrator | 2026-01-13 01:04:48.472868 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-01-13 01:04:48.472875 | orchestrator | Tuesday 13 January 2026 01:02:39 +0000 (0:00:00.531) 0:00:36.698 ******* 2026-01-13 01:04:48.472882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-13 01:04:48.472900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-13 01:04:48.472944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-13 01:04:48.472952 | orchestrator | 2026-01-13 01:04:48.472959 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-01-13 01:04:48.472966 | orchestrator | Tuesday 13 January 2026 01:02:43 +0000 (0:00:04.540) 0:00:41.239 ******* 2026-01-13 01:04:48.472977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-13 01:04:48.472990 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:48.472999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-13 01:04:48.473006 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:48.473017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-13 01:04:48.473028 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:48.473035 | orchestrator | 2026-01-13 01:04:48.473041 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-01-13 01:04:48.473048 | orchestrator | Tuesday 13 January 2026 01:02:48 +0000 (0:00:04.068) 0:00:45.307 ******* 2026-01-13 01:04:48.473064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-13 01:04:48.473071 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:48.473078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-13 01:04:48.473089 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:48.473101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-13 01:04:48.473109 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:48.473115 | orchestrator | 
2026-01-13 01:04:48.473122 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-01-13 01:04:48.473129 | orchestrator | Tuesday 13 January 2026 01:02:54 +0000 (0:00:06.092) 0:00:51.399 ******* 2026-01-13 01:04:48.473138 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:48.473145 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:48.473152 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:48.473159 | orchestrator | 2026-01-13 01:04:48.473165 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-01-13 01:04:48.473171 | orchestrator | Tuesday 13 January 2026 01:02:58 +0000 (0:00:04.250) 0:00:55.650 ******* 2026-01-13 01:04:48.473178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-13 01:04:48.473194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-13 01:04:48.473206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-13 01:04:48.473213 | orchestrator | 2026-01-13 01:04:48.473220 | orchestrator | TASK [glance : Copying 
over glance-api.conf] *********************************** 2026-01-13 01:04:48.473230 | orchestrator | Tuesday 13 January 2026 01:03:02 +0000 (0:00:03.935) 0:00:59.585 ******* 2026-01-13 01:04:48.473237 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:04:48.473244 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:04:48.473250 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:04:48.473257 | orchestrator | 2026-01-13 01:04:48.473263 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-01-13 01:04:48.473270 | orchestrator | Tuesday 13 January 2026 01:03:08 +0000 (0:00:05.897) 0:01:05.482 ******* 2026-01-13 01:04:48.473276 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:48.473283 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:48.473289 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:48.473296 | orchestrator | 2026-01-13 01:04:48.473302 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-01-13 01:04:48.473308 | orchestrator | Tuesday 13 January 2026 01:03:11 +0000 (0:00:03.500) 0:01:08.983 ******* 2026-01-13 01:04:48.473315 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:48.473321 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:48.473328 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:48.473334 | orchestrator | 2026-01-13 01:04:48.473341 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-01-13 01:04:48.473347 | orchestrator | Tuesday 13 January 2026 01:03:16 +0000 (0:00:04.952) 0:01:13.935 ******* 2026-01-13 01:04:48.473354 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:48.473364 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:48.473371 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:48.473377 | orchestrator | 2026-01-13 01:04:48.473384 | orchestrator | TASK [glance : Copying over 
property-protections-rules.conf] ******************* 2026-01-13 01:04:48.473390 | orchestrator | Tuesday 13 January 2026 01:03:19 +0000 (0:00:03.028) 0:01:16.963 ******* 2026-01-13 01:04:48.473396 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:48.473402 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:48.473409 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:48.473415 | orchestrator | 2026-01-13 01:04:48.473422 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-01-13 01:04:48.473428 | orchestrator | Tuesday 13 January 2026 01:03:22 +0000 (0:00:03.011) 0:01:19.974 ******* 2026-01-13 01:04:48.473435 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:48.473441 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:48.473447 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:48.473454 | orchestrator | 2026-01-13 01:04:48.473460 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-01-13 01:04:48.473467 | orchestrator | Tuesday 13 January 2026 01:03:22 +0000 (0:00:00.261) 0:01:20.236 ******* 2026-01-13 01:04:48.473473 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-13 01:04:48.473480 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:48.473487 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-13 01:04:48.473493 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:48.473500 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-13 01:04:48.473507 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:48.473513 | orchestrator | 2026-01-13 01:04:48.473520 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-01-13 01:04:48.473527 | 
orchestrator | Tuesday 13 January 2026 01:03:29 +0000 (0:00:06.723) 0:01:26.959 ******* 2026-01-13 01:04:48.473533 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:04:48.473539 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:04:48.473546 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:04:48.473553 | orchestrator | 2026-01-13 01:04:48.473560 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-01-13 01:04:48.473570 | orchestrator | Tuesday 13 January 2026 01:03:35 +0000 (0:00:05.818) 0:01:32.778 ******* 2026-01-13 01:04:48.473581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-13 01:04:48.473594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}}) 2026-01-13 01:04:48.473605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-13 01:04:48.473616 | orchestrator | 2026-01-13 01:04:48.473622 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-13 01:04:48.473629 | orchestrator | Tuesday 13 January 2026 01:03:38 +0000 (0:00:03.476) 
0:01:36.255 ******* 2026-01-13 01:04:48.473635 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:04:48.473641 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:04:48.473648 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:04:48.473654 | orchestrator | 2026-01-13 01:04:48.473660 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-01-13 01:04:48.473666 | orchestrator | Tuesday 13 January 2026 01:03:39 +0000 (0:00:00.254) 0:01:36.509 ******* 2026-01-13 01:04:48.473673 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:04:48.473679 | orchestrator | 2026-01-13 01:04:48.473686 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-01-13 01:04:48.473693 | orchestrator | Tuesday 13 January 2026 01:03:41 +0000 (0:00:02.055) 0:01:38.565 ******* 2026-01-13 01:04:48.473699 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:04:48.473706 | orchestrator | 2026-01-13 01:04:48.473713 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-01-13 01:04:48.473720 | orchestrator | Tuesday 13 January 2026 01:03:43 +0000 (0:00:02.452) 0:01:41.017 ******* 2026-01-13 01:04:48.473727 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:04:48.473733 | orchestrator | 2026-01-13 01:04:48.473739 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-01-13 01:04:48.473745 | orchestrator | Tuesday 13 January 2026 01:03:45 +0000 (0:00:02.183) 0:01:43.201 ******* 2026-01-13 01:04:48.473752 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:04:48.473758 | orchestrator | 2026-01-13 01:04:48.473765 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-01-13 01:04:48.473772 | orchestrator | Tuesday 13 January 2026 01:04:13 +0000 (0:00:27.604) 0:02:10.805 ******* 2026-01-13 01:04:48.473779 | orchestrator | changed: 
[testbed-node-0]
2026-01-13 01:04:48.473785 | orchestrator |
2026-01-13 01:04:48.473792 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-01-13 01:04:48.473799 | orchestrator | Tuesday 13 January 2026 01:04:15 +0000 (0:00:02.075) 0:02:12.881 *******
2026-01-13 01:04:48.473806 | orchestrator |
2026-01-13 01:04:48.473817 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-01-13 01:04:48.473823 | orchestrator | Tuesday 13 January 2026 01:04:15 +0000 (0:00:00.337) 0:02:13.219 *******
2026-01-13 01:04:48.473830 | orchestrator |
2026-01-13 01:04:48.473837 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-01-13 01:04:48.473843 | orchestrator | Tuesday 13 January 2026 01:04:15 +0000 (0:00:00.070) 0:02:13.289 *******
2026-01-13 01:04:48.473849 | orchestrator |
2026-01-13 01:04:48.473856 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-01-13 01:04:48.473863 | orchestrator | Tuesday 13 January 2026 01:04:16 +0000 (0:00:00.082) 0:02:13.372 *******
2026-01-13 01:04:48.473874 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:04:48.473881 | orchestrator | changed: [testbed-node-2]
2026-01-13 01:04:48.473888 | orchestrator | changed: [testbed-node-1]
2026-01-13 01:04:48.473894 | orchestrator |
2026-01-13 01:04:48.473901 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 01:04:48.473923 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-13 01:04:48.473931 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-13 01:04:48.473938 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-13 01:04:48.473945 | orchestrator |
2026-01-13 01:04:48.473953 | orchestrator |
2026-01-13 01:04:48.473960 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 01:04:48.473967 | orchestrator | Tuesday 13 January 2026 01:04:45 +0000 (0:00:29.430) 0:02:42.803 *******
2026-01-13 01:04:48.473973 | orchestrator | ===============================================================================
2026-01-13 01:04:48.473979 | orchestrator | glance : Restart glance-api container ---------------------------------- 29.43s
2026-01-13 01:04:48.473985 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.60s
2026-01-13 01:04:48.473992 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 6.72s
2026-01-13 01:04:48.473999 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.16s
2026-01-13 01:04:48.474005 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 6.09s
2026-01-13 01:04:48.474012 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.90s
2026-01-13 01:04:48.474057 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.82s
2026-01-13 01:04:48.474064 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.95s
2026-01-13 01:04:48.474071 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.54s
2026-01-13 01:04:48.474078 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.25s
2026-01-13 01:04:48.474085 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.08s
2026-01-13 01:04:48.474092 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.07s
2026-01-13 01:04:48.474099 | orchestrator | glance : Copying over config.json files for services -------------------- 3.94s
2026-01-13 01:04:48.474105 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.64s
2026-01-13 01:04:48.474112 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.50s
2026-01-13 01:04:48.474120 | orchestrator | glance : Check glance containers ---------------------------------------- 3.48s
2026-01-13 01:04:48.474127 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.44s
2026-01-13 01:04:48.474134 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.42s
2026-01-13 01:04:48.474141 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.33s
2026-01-13 01:04:48.474148 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.06s
2026-01-13 01:04:48.474156 | orchestrator | 2026-01-13 01:04:48 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED
2026-01-13 01:04:48.474163 | orchestrator | 2026-01-13 01:04:48 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:04:51.522313 | orchestrator | 2026-01-13 01:04:51 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state STARTED
2026-01-13 01:04:51.523804 | orchestrator | 2026-01-13 01:04:51 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED
2026-01-13 01:04:51.525079 | orchestrator | 2026-01-13 01:04:51 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED
2026-01-13 01:04:51.526679 | orchestrator | 2026-01-13 01:04:51 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED
2026-01-13 01:04:51.527076 | orchestrator | 2026-01-13 01:04:51 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:04:54.571079 | orchestrator | 2026-01-13 01:04:54 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state STARTED
2026-01-13 01:04:54.573387 | orchestrator | 2026-01-13 01:04:54 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED
2026-01-13 01:04:54.575339 | orchestrator | 2026-01-13 01:04:54 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED
2026-01-13 01:04:54.577057 | orchestrator | 2026-01-13 01:04:54 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED
2026-01-13 01:04:54.577109 | orchestrator | 2026-01-13 01:04:54 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:04:57.618390 | orchestrator | 2026-01-13 01:04:57 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state STARTED
2026-01-13 01:04:57.620381 | orchestrator | 2026-01-13 01:04:57 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED
2026-01-13 01:04:57.622120 | orchestrator | 2026-01-13 01:04:57 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED
2026-01-13 01:04:57.624197 | orchestrator | 2026-01-13 01:04:57 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED
2026-01-13 01:04:57.624233 | orchestrator | 2026-01-13 01:04:57 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:05:00.668252 | orchestrator | 2026-01-13 01:05:00 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state STARTED
2026-01-13 01:05:00.668307 | orchestrator | 2026-01-13 01:05:00 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED
2026-01-13 01:05:00.669393 | orchestrator | 2026-01-13 01:05:00 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED
2026-01-13 01:05:00.671026 | orchestrator | 2026-01-13 01:05:00 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED
2026-01-13 01:05:00.671129 | orchestrator | 2026-01-13 01:05:00 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:05:03.703979 | orchestrator | 2026-01-13 01:05:03 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state STARTED
2026-01-13 01:05:03.705151 | orchestrator | 2026-01-13 01:05:03 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED
2026-01-13 01:05:03.707583 | orchestrator | 2026-01-13 01:05:03 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED
2026-01-13 01:05:03.709014 | orchestrator | 2026-01-13 01:05:03 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED
2026-01-13 01:05:03.709322 | orchestrator | 2026-01-13 01:05:03 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:05:06.754210 | orchestrator | 2026-01-13 01:05:06 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state STARTED
2026-01-13 01:05:06.754639 | orchestrator | 2026-01-13 01:05:06 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED
2026-01-13 01:05:06.756783 | orchestrator | 2026-01-13 01:05:06 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED
2026-01-13 01:05:06.758570 | orchestrator | 2026-01-13 01:05:06 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state STARTED
2026-01-13 01:05:06.758620 | orchestrator | 2026-01-13 01:05:06 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:05:09.805356 | orchestrator | 2026-01-13 01:05:09 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state STARTED
2026-01-13 01:05:09.806044 | orchestrator | 2026-01-13 01:05:09 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED
2026-01-13 01:05:09.807179 | orchestrator | 2026-01-13 01:05:09 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED
2026-01-13 01:05:09.811464 | orchestrator | 2026-01-13 01:05:09 | INFO  | Task 2e5038ab-5a75-401a-82a0-f3bb852931c1 is in state SUCCESS
2026-01-13 01:05:09.811732 | orchestrator | 2026-01-13 01:05:09 | INFO  | Wait 1 second(s) until the next check
2026-01-13 01:05:09.812960 | orchestrator |
2026-01-13 01:05:09.812998 | orchestrator |
2026-01-13 01:05:09.813006 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-13 01:05:09.813012 | orchestrator |
2026-01-13 01:05:09.813017 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-13 01:05:09.813023 | orchestrator | Tuesday 13 January 2026 01:02:23 +0000 (0:00:00.226) 0:00:00.227 *******
2026-01-13 01:05:09.813029 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:05:09.813035 | orchestrator | ok: [testbed-node-1]
2026-01-13 01:05:09.813041 | orchestrator | ok: [testbed-node-2]
2026-01-13 01:05:09.813046 | orchestrator |
2026-01-13 01:05:09.813051 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-13 01:05:09.813056 | orchestrator | Tuesday 13 January 2026 01:02:24 +0000 (0:00:00.280) 0:00:00.507 *******
2026-01-13 01:05:09.813061 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-01-13 01:05:09.813067 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-01-13 01:05:09.813072 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-01-13 01:05:09.813077 | orchestrator |
2026-01-13 01:05:09.813083 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-01-13 01:05:09.813088 | orchestrator |
2026-01-13 01:05:09.813093 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-13 01:05:09.813098 | orchestrator | Tuesday 13 January 2026 01:02:24 +0000 (0:00:00.346) 0:00:00.853 *******
2026-01-13 01:05:09.813138 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 01:05:09.813145 | orchestrator |
2026-01-13 01:05:09.813150 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-01-13 01:05:09.813155 | orchestrator | Tuesday 13 January 2026 01:02:24 +0000 (0:00:00.480) 0:00:01.334 *******
2026-01-13 01:05:09.813161 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-01-13 01:05:09.813166 | orchestrator |
2026-01-13 01:05:09.813171 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-01-13 01:05:09.813177 | orchestrator | Tuesday 13 January 2026 01:02:28 +0000 (0:00:03.247) 0:00:04.581 *******
2026-01-13 01:05:09.813182 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-01-13 01:05:09.813189 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-01-13 01:05:09.813194 | orchestrator |
2026-01-13 01:05:09.813199 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-01-13 01:05:09.813255 | orchestrator | Tuesday 13 January 2026 01:02:33 +0000 (0:00:05.869) 0:00:10.451 *******
2026-01-13 01:05:09.813262 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-13 01:05:09.813267 | orchestrator |
2026-01-13 01:05:09.813273 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-01-13 01:05:09.813278 | orchestrator | Tuesday 13 January 2026 01:02:37 +0000 (0:00:03.811) 0:00:14.263 *******
2026-01-13 01:05:09.813283 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-13 01:05:09.813289 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-01-13 01:05:09.813338 | orchestrator |
2026-01-13 01:05:09.813344 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-01-13 01:05:09.813363 | orchestrator | Tuesday 13 January 2026 01:02:41 +0000 (0:00:03.702) 0:00:17.965 *******
2026-01-13 01:05:09.813553 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-13 01:05:09.813559 | orchestrator |
2026-01-13 01:05:09.813564 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-01-13 01:05:09.813570 | orchestrator | Tuesday 13 January 2026 01:02:45 +0000 (0:00:03.850) 0:00:21.816 *******
2026-01-13 01:05:09.813575 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-01-13 01:05:09.813580 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-01-13 01:05:09.813586 | orchestrator |
2026-01-13 01:05:09.813591 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-01-13 01:05:09.813604 | orchestrator | Tuesday 13 January 2026 01:02:52 +0000 (0:00:07.064) 0:00:28.880 *******
2026-01-13 01:05:09.813639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-13 01:05:09.813656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-13 01:05:09.813662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-13 01:05:09.813668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.813679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.813689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.813788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.813810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.813817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.813823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.813832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.813841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.813847 | orchestrator |
2026-01-13 01:05:09.813852 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-13 01:05:09.813858 | orchestrator | Tuesday 13 January 2026 01:02:55 +0000 (0:00:02.696) 0:00:31.577 *******
2026-01-13 01:05:09.813863 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:05:09.813868 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:05:09.813874 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:05:09.813879 | orchestrator |
2026-01-13 01:05:09.813884 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-13 01:05:09.813889 | orchestrator | Tuesday 13 January 2026 01:02:55 +0000 (0:00:00.515) 0:00:32.092 *******
2026-01-13 01:05:09.813923 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 01:05:09.813930 | orchestrator |
2026-01-13 01:05:09.813935 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-01-13 01:05:09.813941 | orchestrator | Tuesday 13 January 2026 01:02:56 +0000 (0:00:01.128) 0:00:33.221 *******
2026-01-13 01:05:09.813963 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-01-13 01:05:09.813970 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-01-13 01:05:09.813975 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-01-13 01:05:09.813980 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-01-13 01:05:09.813986 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-01-13 01:05:09.813991 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-01-13 01:05:09.813996 | orchestrator |
2026-01-13 01:05:09.814001 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-01-13 01:05:09.814006 | orchestrator | Tuesday 13 January 2026 01:02:58 +0000 (0:00:01.896) 0:00:35.117 *******
2026-01-13 01:05:09.814043 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-13 01:05:09.814058 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-13 01:05:09.814067 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-13 01:05:09.814073 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-13 01:05:09.814096 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-13 01:05:09.814103 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-13 01:05:09.814114 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-13 01:05:09.814120 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-13 01:05:09.814128 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-13 01:05:09.814147 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-13 01:05:09.814154 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-13 01:05:09.814163 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-01-13 01:05:09.814169 | orchestrator |
2026-01-13 01:05:09.814175 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-01-13 01:05:09.814181 | orchestrator | Tuesday 13 January 2026 01:03:01 +0000 (0:00:03.239) 0:00:38.357 *******
2026-01-13 01:05:09.814187 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-01-13 01:05:09.814193 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-01-13 01:05:09.814198 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-01-13 01:05:09.814204 | orchestrator |
2026-01-13 01:05:09.814209 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-01-13 01:05:09.814214 | orchestrator | Tuesday 13 January 2026 01:03:04 +0000 (0:00:02.436) 0:00:40.794 *******
2026-01-13 01:05:09.814220 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-01-13 01:05:09.814225 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-01-13 01:05:09.814230 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-01-13 01:05:09.814236 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-01-13 01:05:09.814241 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-01-13 01:05:09.814246 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-01-13 01:05:09.814252 | orchestrator |
2026-01-13 01:05:09.814257 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-01-13 01:05:09.814269 | orchestrator | Tuesday 13 January 2026 01:03:07 +0000 (0:00:03.127) 0:00:43.921 *******
2026-01-13 01:05:09.814275 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-01-13 01:05:09.814280 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-01-13 01:05:09.814285 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-01-13 01:05:09.814291 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-01-13 01:05:09.814296 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-01-13 01:05:09.814301 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-01-13 01:05:09.814307 | orchestrator |
2026-01-13 01:05:09.814313 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-01-13 01:05:09.814318 | orchestrator | Tuesday 13 January 2026 01:03:08 +0000 (0:00:00.976) 0:00:44.898 *******
2026-01-13 01:05:09.814323 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:05:09.814328 | orchestrator |
2026-01-13 01:05:09.814334 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-01-13 01:05:09.814338 | orchestrator | Tuesday 13 January 2026 01:03:08 +0000 (0:00:00.097) 0:00:44.996 *******
2026-01-13 01:05:09.814344 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:05:09.814348 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:05:09.814353 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:05:09.814362 | orchestrator |
2026-01-13 01:05:09.814367 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-13 01:05:09.814372 | orchestrator | Tuesday 13 January 2026 01:03:08 +0000 (0:00:00.336) 0:00:45.332 *******
2026-01-13 01:05:09.814377 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 01:05:09.814386 | orchestrator |
2026-01-13 01:05:09.814391 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-01-13 01:05:09.814415 | orchestrator | Tuesday 13 January 2026 01:03:09 +0000 (0:00:00.646) 0:00:45.979 *******
2026-01-13 01:05:09.814422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-13 01:05:09.814429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-13 01:05:09.814435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-13 01:05:09.814445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.814493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.814502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.814513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.814520 | orchestrator |
2026-01-13 01:05:09.814525 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-01-13 01:05:09.814531 | orchestrator | Tuesday 13 January 2026 01:03:14 +0000 (0:00:04.885) 0:00:50.864 *******
2026-01-13 01:05:09.814537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-13 01:05:09.814542 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814566 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:05:09.814576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-13 01:05:09.814582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814600 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:05:09.814608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-13 01:05:09.814619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-13 01:05:09.814641 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:05:09.814647 | orchestrator |
2026-01-13 01:05:09.814652 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-01-13 01:05:09.814658 | orchestrator | Tuesday 13 January 2026 01:03:15 +0000 (0:00:01.128) 0:00:51.993 *******
2026-01-13 01:05:09.814664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-13 01:05:09.814672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814698 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:05:09.814704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-13 01:05:09.814709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814733 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:05:09.814739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-13 01:05:09.814747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.814764 | orchestrator | skipping: 
[testbed-node-2]
2026-01-13 01:05:09.814770 | orchestrator |
2026-01-13 01:05:09.814775 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-01-13 01:05:09.814785 | orchestrator | Tuesday 13 January 2026 01:03:16 +0000 (0:00:01.347) 0:00:53.341 *******
2026-01-13 01:05:09.814794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-13 01:05:09.814801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes',
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-13 01:05:09.814809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-13 01:05:09.814815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814865 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814875 | orchestrator | 2026-01-13 01:05:09.814879 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-01-13 01:05:09.814883 | orchestrator | Tuesday 13 January 2026 01:03:20 +0000 (0:00:04.119) 0:00:57.460 ******* 2026-01-13 01:05:09.814889 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-13 01:05:09.814910 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-13 01:05:09.814916 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-13 01:05:09.814921 | orchestrator | 2026-01-13 01:05:09.814925 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-01-13 01:05:09.814930 | orchestrator | Tuesday 13 January 2026 01:03:22 +0000 (0:00:01.838) 0:00:59.299 ******* 2026-01-13 01:05:09.814938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-13 01:05:09.814943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-13 01:05:09.814948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-13 01:05:09.814957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.814997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.815005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.815012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.815018 | orchestrator | 2026-01-13 01:05:09.815023 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-01-13 01:05:09.815029 | orchestrator | Tuesday 13 January 2026 01:03:38 +0000 (0:00:15.185) 0:01:14.485 ******* 2026-01-13 01:05:09.815036 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:05:09.815041 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:05:09.815047 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:05:09.815050 | orchestrator 
| 2026-01-13 01:05:09.815054 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-01-13 01:05:09.815059 | orchestrator | Tuesday 13 January 2026 01:03:39 +0000 (0:00:01.468) 0:01:15.953 ******* 2026-01-13 01:05:09.815063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-13 01:05:09.815069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.815073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.815078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.815082 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:05:09.815085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-13 01:05:09.815092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.815096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.815101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.815105 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:05:09.815108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-13 01:05:09.815114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 
01:05:09.815118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.815124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-13 01:05:09.815130 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:05:09.815133 | orchestrator | 2026-01-13 01:05:09.815137 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-01-13 01:05:09.815140 | orchestrator | Tuesday 13 January 2026 01:03:40 +0000 (0:00:00.582) 0:01:16.536 ******* 2026-01-13 01:05:09.815144 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:05:09.815147 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:05:09.815150 | 
orchestrator | skipping: [testbed-node-2] 2026-01-13 01:05:09.815154 | orchestrator | 2026-01-13 01:05:09.815157 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-01-13 01:05:09.815160 | orchestrator | Tuesday 13 January 2026 01:03:40 +0000 (0:00:00.289) 0:01:16.825 ******* 2026-01-13 01:05:09.815164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-13 01:05:09.815167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-13 01:05:09.815172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-13 01:05:09.815178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.815185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.815189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.815192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.815196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.815200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.815206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.815213 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.815216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-13 01:05:09.815220 | orchestrator | 2026-01-13 01:05:09.815223 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-13 01:05:09.815226 | orchestrator | Tuesday 13 January 2026 01:03:43 +0000 (0:00:03.248) 0:01:20.073 ******* 2026-01-13 01:05:09.815229 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:05:09.815232 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:05:09.815235 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:05:09.815239 | orchestrator | 2026-01-13 01:05:09.815242 | orchestrator | TASK [cinder : 
Creating Cinder database] *************************************** 2026-01-13 01:05:09.815245 | orchestrator | Tuesday 13 January 2026 01:03:44 +0000 (0:00:00.697) 0:01:20.771 ******* 2026-01-13 01:05:09.815248 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:05:09.815251 | orchestrator | 2026-01-13 01:05:09.815254 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-01-13 01:05:09.815257 | orchestrator | Tuesday 13 January 2026 01:03:46 +0000 (0:00:02.178) 0:01:22.950 ******* 2026-01-13 01:05:09.815261 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:05:09.815264 | orchestrator | 2026-01-13 01:05:09.815268 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-01-13 01:05:09.815271 | orchestrator | Tuesday 13 January 2026 01:03:48 +0000 (0:00:02.224) 0:01:25.175 ******* 2026-01-13 01:05:09.815274 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:05:09.815277 | orchestrator | 2026-01-13 01:05:09.815280 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-13 01:05:09.815283 | orchestrator | Tuesday 13 January 2026 01:04:08 +0000 (0:00:19.633) 0:01:44.808 ******* 2026-01-13 01:05:09.815287 | orchestrator | 2026-01-13 01:05:09.815290 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-13 01:05:09.815293 | orchestrator | Tuesday 13 January 2026 01:04:08 +0000 (0:00:00.067) 0:01:44.876 ******* 2026-01-13 01:05:09.815296 | orchestrator | 2026-01-13 01:05:09.815300 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-13 01:05:09.815305 | orchestrator | Tuesday 13 January 2026 01:04:08 +0000 (0:00:00.065) 0:01:44.941 ******* 2026-01-13 01:05:09.815308 | orchestrator | 2026-01-13 01:05:09.815312 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 
2026-01-13 01:05:09.815315 | orchestrator | Tuesday 13 January 2026 01:04:08 +0000 (0:00:00.064) 0:01:45.005 ******* 2026-01-13 01:05:09.815321 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:05:09.815324 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:05:09.815327 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:05:09.815330 | orchestrator | 2026-01-13 01:05:09.815333 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-01-13 01:05:09.815338 | orchestrator | Tuesday 13 January 2026 01:04:32 +0000 (0:00:24.405) 0:02:09.411 ******* 2026-01-13 01:05:09.815343 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:05:09.815349 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:05:09.815354 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:05:09.815359 | orchestrator | 2026-01-13 01:05:09.815364 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-01-13 01:05:09.815369 | orchestrator | Tuesday 13 January 2026 01:04:43 +0000 (0:00:10.515) 0:02:19.927 ******* 2026-01-13 01:05:09.815373 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:05:09.815378 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:05:09.815383 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:05:09.815388 | orchestrator | 2026-01-13 01:05:09.815393 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-01-13 01:05:09.815398 | orchestrator | Tuesday 13 January 2026 01:05:01 +0000 (0:00:17.661) 0:02:37.588 ******* 2026-01-13 01:05:09.815402 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:05:09.815407 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:05:09.815412 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:05:09.815417 | orchestrator | 2026-01-13 01:05:09.815422 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-01-13 
01:05:09.815431 | orchestrator | Tuesday 13 January 2026 01:05:06 +0000 (0:00:05.728) 0:02:43.317 ******* 2026-01-13 01:05:09.815436 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:05:09.815441 | orchestrator | 2026-01-13 01:05:09.815446 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:05:09.815451 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-13 01:05:09.815456 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-13 01:05:09.815461 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-13 01:05:09.815465 | orchestrator | 2026-01-13 01:05:09.815471 | orchestrator | 2026-01-13 01:05:09.815475 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:05:09.815480 | orchestrator | Tuesday 13 January 2026 01:05:07 +0000 (0:00:00.259) 0:02:43.577 ******* 2026-01-13 01:05:09.815485 | orchestrator | =============================================================================== 2026-01-13 01:05:09.815490 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.41s 2026-01-13 01:05:09.815495 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.63s 2026-01-13 01:05:09.815500 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 17.66s 2026-01-13 01:05:09.815505 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.19s 2026-01-13 01:05:09.815511 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.52s 2026-01-13 01:05:09.815516 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.06s 2026-01-13 01:05:09.815522 | orchestrator | 
service-ks-register : cinder | Creating endpoints ----------------------- 5.87s 2026-01-13 01:05:09.815527 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.73s 2026-01-13 01:05:09.815532 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.89s 2026-01-13 01:05:09.815537 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.12s 2026-01-13 01:05:09.815547 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.85s 2026-01-13 01:05:09.815552 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.81s 2026-01-13 01:05:09.815557 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.70s 2026-01-13 01:05:09.815562 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.25s 2026-01-13 01:05:09.815566 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.25s 2026-01-13 01:05:09.815571 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.24s 2026-01-13 01:05:09.815575 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.13s 2026-01-13 01:05:09.815580 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.70s 2026-01-13 01:05:09.815585 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.44s 2026-01-13 01:05:09.815590 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.22s 2026-01-13 01:05:12.862269 | orchestrator | 2026-01-13 01:05:12 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state STARTED 2026-01-13 01:05:12.864229 | orchestrator | 2026-01-13 01:05:12 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED 2026-01-13 01:05:12.867184 | 
orchestrator | 2026-01-13 01:05:12 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:05:12.867232 | orchestrator | 2026-01-13 01:05:12 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:06:47.484068 | orchestrator | 2026-01-13 01:06:47 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state
STARTED 2026-01-13 01:06:47.485476 | orchestrator | 2026-01-13 01:06:47 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state STARTED 2026-01-13 01:06:47.488429 | orchestrator | 2026-01-13 01:06:47 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:06:47.488470 | orchestrator | 2026-01-13 01:06:47 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:06:50.537983 | orchestrator | 2026-01-13 01:06:50 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state STARTED 2026-01-13 01:06:50.540133 | orchestrator | 2026-01-13 01:06:50 | INFO  | Task 44d3327a-5e93-4172-95e8-8f720dfb7d65 is in state SUCCESS 2026-01-13 01:06:50.542779 | orchestrator | 2026-01-13 01:06:50 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:06:50.544079 | orchestrator | 2026-01-13 01:06:50 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:06:53.598651 | orchestrator | 2026-01-13 01:06:53 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state STARTED 2026-01-13 01:06:53.599481 | orchestrator | 2026-01-13 01:06:53 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:06:53.600934 | orchestrator | 2026-01-13 01:06:53 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:06:53.600973 | orchestrator | 2026-01-13 01:06:53 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:06:56.642558 | orchestrator | 2026-01-13 01:06:56 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state STARTED 2026-01-13 01:06:56.644250 | orchestrator | 2026-01-13 01:06:56 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:06:56.645477 | orchestrator | 2026-01-13 01:06:56 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:06:56.645524 | orchestrator | 2026-01-13 01:06:56 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:06:59.701638 | orchestrator | 
2026-01-13 01:06:59 | INFO  | Task f3493f45-d74b-4bf5-afa0-2506ace92edf is in state SUCCESS 2026-01-13 01:06:59.704055 | orchestrator | 2026-01-13 01:06:59.704110 | orchestrator | 2026-01-13 01:06:59.704119 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 01:06:59.704125 | orchestrator | 2026-01-13 01:06:59.704130 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 01:06:59.704137 | orchestrator | Tuesday 13 January 2026 01:04:11 +0000 (0:00:00.184) 0:00:00.185 ******* 2026-01-13 01:06:59.704141 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:06:59.704144 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:06:59.704148 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:06:59.704151 | orchestrator | 2026-01-13 01:06:59.704154 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 01:06:59.704158 | orchestrator | Tuesday 13 January 2026 01:04:11 +0000 (0:00:00.347) 0:00:00.532 ******* 2026-01-13 01:06:59.704161 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-01-13 01:06:59.704165 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-01-13 01:06:59.704176 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-01-13 01:06:59.704179 | orchestrator | 2026-01-13 01:06:59.704183 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-01-13 01:06:59.704186 | orchestrator | 2026-01-13 01:06:59.704190 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-01-13 01:06:59.704193 | orchestrator | Tuesday 13 January 2026 01:04:12 +0000 (0:00:01.181) 0:00:01.714 ******* 2026-01-13 01:06:59.704196 | orchestrator | 2026-01-13 01:06:59.704199 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2026-01-13 01:06:59.704202 
| orchestrator | ok: [testbed-node-0] 2026-01-13 01:06:59.704206 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:06:59.704209 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:06:59.704212 | orchestrator | 2026-01-13 01:06:59.704217 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:06:59.704222 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:06:59.704231 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:06:59.704239 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:06:59.704243 | orchestrator | 2026-01-13 01:06:59.704248 | orchestrator | 2026-01-13 01:06:59.704252 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:06:59.704257 | orchestrator | Tuesday 13 January 2026 01:06:49 +0000 (0:02:36.889) 0:02:38.604 ******* 2026-01-13 01:06:59.704262 | orchestrator | =============================================================================== 2026-01-13 01:06:59.704267 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 156.89s 2026-01-13 01:06:59.704271 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.18s 2026-01-13 01:06:59.704276 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-01-13 01:06:59.704281 | orchestrator | 2026-01-13 01:06:59.704286 | orchestrator | 2026-01-13 01:06:59.704290 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 01:06:59.704308 | orchestrator | 2026-01-13 01:06:59.704314 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 01:06:59.704318 | orchestrator | Tuesday 13 January 
2026 01:04:50 +0000 (0:00:00.275) 0:00:00.275 ******* 2026-01-13 01:06:59.704323 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:06:59.704328 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:06:59.704333 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:06:59.704339 | orchestrator | 2026-01-13 01:06:59.704343 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 01:06:59.704348 | orchestrator | Tuesday 13 January 2026 01:04:50 +0000 (0:00:00.318) 0:00:00.594 ******* 2026-01-13 01:06:59.704354 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-13 01:06:59.704357 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-13 01:06:59.704360 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-13 01:06:59.704363 | orchestrator | 2026-01-13 01:06:59.704366 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-13 01:06:59.704370 | orchestrator | 2026-01-13 01:06:59.704373 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-13 01:06:59.704376 | orchestrator | Tuesday 13 January 2026 01:04:51 +0000 (0:00:00.443) 0:00:01.038 ******* 2026-01-13 01:06:59.704379 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:06:59.704383 | orchestrator | 2026-01-13 01:06:59.704386 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-13 01:06:59.704389 | orchestrator | Tuesday 13 January 2026 01:04:51 +0000 (0:00:00.487) 0:00:01.525 ******* 2026-01-13 01:06:59.704394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.704408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.704415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.704419 | orchestrator | 2026-01-13 01:06:59.704426 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-13 
01:06:59.704433 | orchestrator | Tuesday 13 January 2026 01:04:52 +0000 (0:00:00.704) 0:00:02.230 ******* 2026-01-13 01:06:59.704442 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-01-13 01:06:59.704447 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-01-13 01:06:59.704452 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-13 01:06:59.704457 | orchestrator | 2026-01-13 01:06:59.704461 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-13 01:06:59.704466 | orchestrator | Tuesday 13 January 2026 01:04:53 +0000 (0:00:00.824) 0:00:03.055 ******* 2026-01-13 01:06:59.704470 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:06:59.704580 | orchestrator | 2026-01-13 01:06:59.704589 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-13 01:06:59.704593 | orchestrator | Tuesday 13 January 2026 01:04:54 +0000 (0:00:00.678) 0:00:03.733 ******* 2026-01-13 01:06:59.704599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.704876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.704896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.704903 | orchestrator | 2026-01-13 01:06:59.704917 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-01-13 01:06:59.704923 | orchestrator | Tuesday 13 January 2026 01:04:55 +0000 (0:00:01.341) 0:00:05.075 ******* 2026-01-13 01:06:59.704962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-13 01:06:59.704971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-13 01:06:59.704985 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:06:59.704991 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:06:59.704997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-13 01:06:59.705002 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:06:59.705008 | orchestrator | 2026-01-13 01:06:59.705014 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend 
internal TLS key] ***** 2026-01-13 01:06:59.705020 | orchestrator | Tuesday 13 January 2026 01:04:55 +0000 (0:00:00.374) 0:00:05.450 ******* 2026-01-13 01:06:59.705027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-13 01:06:59.705033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-13 01:06:59.705040 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:06:59.705046 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:06:59.705059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-13 01:06:59.705066 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:06:59.705072 | orchestrator | 2026-01-13 01:06:59.705082 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-13 01:06:59.705089 | orchestrator | Tuesday 13 January 2026 01:04:56 +0000 (0:00:00.823) 0:00:06.273 ******* 2026-01-13 01:06:59.705098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.705103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.705108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.705114 | orchestrator | 2026-01-13 01:06:59.705119 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-01-13 01:06:59.705124 | orchestrator | Tuesday 13 January 2026 01:04:57 +0000 (0:00:01.107) 0:00:07.380 ******* 2026-01-13 01:06:59.705129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.705135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.705145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.705154 | orchestrator | 2026-01-13 01:06:59.705197 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-13 01:06:59.705205 | orchestrator | Tuesday 13 January 2026 01:04:58 +0000 (0:00:01.135) 0:00:08.515 ******* 2026-01-13 01:06:59.705210 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:06:59.705252 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:06:59.705259 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:06:59.705263 | orchestrator | 2026-01-13 01:06:59.705441 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-13 01:06:59.705453 | orchestrator | Tuesday 13 January 
2026 01:04:59 +0000 (0:00:00.515) 0:00:09.031 *******
2026-01-13 01:06:59.705459 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-13 01:06:59.705465 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-13 01:06:59.705470 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-13 01:06:59.705475 | orchestrator |
2026-01-13 01:06:59.705478 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-01-13 01:06:59.705481 | orchestrator | Tuesday 13 January 2026 01:05:00 +0000 (0:00:01.200) 0:00:10.231 *******
2026-01-13 01:06:59.705484 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-13 01:06:59.705545 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-13 01:06:59.705552 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-13 01:06:59.705557 | orchestrator |
2026-01-13 01:06:59.705563 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-01-13 01:06:59.705568 | orchestrator | Tuesday 13 January 2026 01:05:01 +0000 (0:00:00.762) 0:00:11.419 *******
2026-01-13 01:06:59.705573 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-13 01:06:59.705578 | orchestrator |
2026-01-13 01:06:59.705583 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-01-13 01:06:59.705589 | orchestrator | Tuesday 13 January 2026 01:05:02 +0000 (0:00:00.762) 0:00:12.181 *******
2026-01-13 01:06:59.705592 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-01-13
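The "Configuring Prometheus as data source" task above templates prometheus.yaml.j2 into Grafana's provisioning directory. A sketch of the kind of datasource provisioning file such a template typically renders — the URL and `isDefault` value are illustrative assumptions, not values from this job:

```yaml
# Sketch only: a Grafana datasource provisioning file of the shape that
# prometheus.yaml.j2 plausibly renders. The url is an assumption.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: "http://192.0.2.10:9091"
    isDefault: true
```

Grafana reads such files from its provisioning/datasources directory at startup, which is why the role only needs to copy the rendered file and restart the container rather than call the Grafana API.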
01:06:59.705595 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-01-13 01:06:59.705598 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:06:59.705602 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:06:59.705605 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:06:59.705608 | orchestrator | 2026-01-13 01:06:59.705611 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-01-13 01:06:59.705614 | orchestrator | Tuesday 13 January 2026 01:05:03 +0000 (0:00:00.729) 0:00:12.910 ******* 2026-01-13 01:06:59.705617 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:06:59.705620 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:06:59.705623 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:06:59.705626 | orchestrator | 2026-01-13 01:06:59.705630 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-01-13 01:06:59.705633 | orchestrator | Tuesday 13 January 2026 01:05:03 +0000 (0:00:00.516) 0:00:13.427 ******* 2026-01-13 01:06:59.705636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1317504, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1011682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.705663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1317504, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1011682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.705673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1317504, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1011682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.705681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1317572, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1126423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.705689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1317572, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1126423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.705695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1317572, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1126423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.705701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1317521, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1037252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.705714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1317521, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1037252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.705735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1317521, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1037252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.705747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1317573, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1154642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.705752 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1317573, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1154642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.705758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1317573, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1154642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.705763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1317539, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1061764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.705774 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1317539, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1061764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1317539, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1061764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1317557, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1113179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1317557, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1113179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1317557, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1113179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1317503, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.0994604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1317503, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.0994604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1317503, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.0994604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1317514, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1017954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1317514, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1017954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1317514, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1017954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1317525, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1037955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1317525, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1037955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1317525, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1037955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1317546, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.108681, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1317546, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.108681, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1317546, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.108681, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1317571, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1117954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1317571, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1117954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1317571, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1117954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1317518, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1027954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1317518, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1027954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1317518, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1027954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1317553, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.110036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.705996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1317553, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.110036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1317553, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.110036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1317541, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.107623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1317541, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.107623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1317541, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.107623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1317534, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1061764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1317534, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1061764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1317532, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1048384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1317534, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1061764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1317532, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1048384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1317551, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1087954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1317551, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1087954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1317532, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1048384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1317527, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1044922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1317527, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1044922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1317551, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1087954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1317568, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.111564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1317568, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.111564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1317527, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1044922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1317903, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1910748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1317568, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.111564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1317903, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1910748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1317903, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1910748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1317636, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1327233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1317636, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1327233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1317614, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1209862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1317636, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1327233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1317614, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1209862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1317661, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1317614, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1209862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1317661, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1317589, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1173158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1317661, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1317589, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1173158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1317855, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1827967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1317589, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1173158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1317855, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1827967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-13 01:06:59.706303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1317669, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime':
1768263336.143796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1317855, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1827967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1317669, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.143796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 22317, 'inode': 1317859, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1837895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1317669, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.143796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1317859, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1837895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1317890, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1897967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1317890, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1897967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1317859, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1837895, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1317854, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1819885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1317854, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1819885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1317890, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1897967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706377 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1317652, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1344657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1317652, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1344657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1317854, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1819885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706398 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1317625, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.125227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1317625, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.125227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1317652, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1344657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-01-13 01:06:59.706425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1317649, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1327958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1317649, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1327958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1317625, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.125227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1317616, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1237957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1317649, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1327958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1317616, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1237957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1317655, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1356792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1317655, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1356792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1317616, 'dev': 125, 'nlink': 1, 'atime': 
1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1237957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1317880, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1887968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1317880, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1887968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1317655, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1356792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1317870, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.185888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1317870, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.185888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1317880, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1887968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1317597, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1182532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1317597, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1182532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1317870, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.185888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1317605, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1209862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1317605, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1209862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706553 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1317597, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1182532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1317682, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1460068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1317682, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1460068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1317863, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.184734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1317605, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1209862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1317863, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.184734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1317682, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.1460068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1317863, 'dev': 125, 'nlink': 1, 'atime': 1768262556.0, 'mtime': 1768262556.0, 'ctime': 1768263336.184734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-13 01:06:59.706585 | orchestrator | 2026-01-13 01:06:59.706589 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-01-13 01:06:59.706592 | orchestrator | Tuesday 13 January 2026 01:05:39 +0000 (0:00:35.874) 0:00:49.302 ******* 2026-01-13 01:06:59.706598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.706605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.706609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-13 01:06:59.706612 | orchestrator | 2026-01-13 01:06:59.706615 | 
orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-13 01:06:59.706618 | orchestrator | Tuesday 13 January 2026 01:05:40 +0000 (0:00:01.050) 0:00:50.352 ******* 2026-01-13 01:06:59.706621 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:06:59.706624 | orchestrator | 2026-01-13 01:06:59.706627 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-13 01:06:59.706631 | orchestrator | Tuesday 13 January 2026 01:05:42 +0000 (0:00:02.302) 0:00:52.654 ******* 2026-01-13 01:06:59.706634 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:06:59.706637 | orchestrator | 2026-01-13 01:06:59.706640 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-13 01:06:59.706643 | orchestrator | Tuesday 13 January 2026 01:05:45 +0000 (0:00:02.216) 0:00:54.871 ******* 2026-01-13 01:06:59.706646 | orchestrator | 2026-01-13 01:06:59.706649 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-13 01:06:59.706652 | orchestrator | Tuesday 13 January 2026 01:05:45 +0000 (0:00:00.063) 0:00:54.934 ******* 2026-01-13 01:06:59.706655 | orchestrator | 2026-01-13 01:06:59.706658 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-13 01:06:59.706661 | orchestrator | Tuesday 13 January 2026 01:05:45 +0000 (0:00:00.060) 0:00:54.994 ******* 2026-01-13 01:06:59.706664 | orchestrator | 2026-01-13 01:06:59.706667 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-01-13 01:06:59.706670 | orchestrator | Tuesday 13 January 2026 01:05:45 +0000 (0:00:00.225) 0:00:55.219 ******* 2026-01-13 01:06:59.706673 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:06:59.706676 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:06:59.706679 | orchestrator | changed: [testbed-node-0] 
2026-01-13 01:06:59.706682 | orchestrator | 2026-01-13 01:06:59.706685 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-01-13 01:06:59.706688 | orchestrator | Tuesday 13 January 2026 01:05:47 +0000 (0:00:01.858) 0:00:57.078 ******* 2026-01-13 01:06:59.706691 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:06:59.706697 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:06:59.706700 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-13 01:06:59.706704 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-01-13 01:06:59.706707 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-01-13 01:06:59.706710 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:06:59.706713 | orchestrator | 2026-01-13 01:06:59.706716 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-01-13 01:06:59.706719 | orchestrator | Tuesday 13 January 2026 01:06:25 +0000 (0:00:38.331) 0:01:35.409 ******* 2026-01-13 01:06:59.706722 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:06:59.706725 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:06:59.706728 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:06:59.706731 | orchestrator | 2026-01-13 01:06:59.706735 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-01-13 01:06:59.706738 | orchestrator | Tuesday 13 January 2026 01:06:52 +0000 (0:00:26.777) 0:02:02.187 ******* 2026-01-13 01:06:59.706741 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:06:59.706745 | orchestrator | 2026-01-13 01:06:59.706749 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-01-13 01:06:59.706757 | orchestrator | Tuesday 13 
January 2026 01:06:54 +0000 (0:00:01.950) 0:02:04.138 ******* 2026-01-13 01:06:59.706763 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:06:59.706768 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:06:59.706773 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:06:59.706778 | orchestrator | 2026-01-13 01:06:59.706783 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-01-13 01:06:59.706787 | orchestrator | Tuesday 13 January 2026 01:06:54 +0000 (0:00:00.361) 0:02:04.499 ******* 2026-01-13 01:06:59.706796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-01-13 01:06:59.706802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-01-13 01:06:59.706807 | orchestrator | 2026-01-13 01:06:59.706836 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-01-13 01:06:59.706842 | orchestrator | Tuesday 13 January 2026 01:06:57 +0000 (0:00:02.545) 0:02:07.045 ******* 2026-01-13 01:06:59.706847 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:06:59.706851 | orchestrator | 2026-01-13 01:06:59.706856 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:06:59.706862 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-13 01:06:59.706868 | orchestrator | 
testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-13 01:06:59.706873 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-13 01:06:59.706877 | orchestrator | 2026-01-13 01:06:59.706882 | orchestrator | 2026-01-13 01:06:59.706887 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:06:59.706892 | orchestrator | Tuesday 13 January 2026 01:06:57 +0000 (0:00:00.244) 0:02:07.289 ******* 2026-01-13 01:06:59.706897 | orchestrator | =============================================================================== 2026-01-13 01:06:59.706907 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.33s 2026-01-13 01:06:59.706912 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 35.87s 2026-01-13 01:06:59.706916 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 26.78s 2026-01-13 01:06:59.706921 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.55s 2026-01-13 01:06:59.706926 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.30s 2026-01-13 01:06:59.706931 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.22s 2026-01-13 01:06:59.706935 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 1.95s 2026-01-13 01:06:59.706940 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.86s 2026-01-13 01:06:59.706945 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.34s 2026-01-13 01:06:59.706950 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.20s 2026-01-13 01:06:59.706955 | orchestrator | grafana : Configuring dashboards 
provisioning --------------------------- 1.19s 2026-01-13 01:06:59.706960 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.14s 2026-01-13 01:06:59.706964 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.11s 2026-01-13 01:06:59.706969 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.05s 2026-01-13 01:06:59.706974 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.82s 2026-01-13 01:06:59.706979 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.82s 2026-01-13 01:06:59.706983 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.76s 2026-01-13 01:06:59.706988 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s 2026-01-13 01:06:59.706993 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.70s 2026-01-13 01:06:59.706998 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.68s 2026-01-13 01:06:59.707003 | orchestrator | 2026-01-13 01:06:59 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:06:59.707008 | orchestrator | 2026-01-13 01:06:59 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:06:59.707013 | orchestrator | 2026-01-13 01:06:59 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:07:02.752484 | orchestrator | 2026-01-13 01:07:02 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:07:02.752848 | orchestrator | 2026-01-13 01:07:02 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:07:02.752873 | orchestrator | 2026-01-13 01:07:02 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:07:05.792419 | orchestrator | 2026-01-13 01:07:05 | 
INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:07:05.792467 | orchestrator | 2026-01-13 01:07:05 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:07:05.792471 | orchestrator | 2026-01-13 01:07:05 | INFO  | Wait 1 second(s) until the next check
| Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:10:14.614976 | orchestrator | 2026-01-13 01:10:14 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:10:17.667859 | orchestrator | 2026-01-13 01:10:17 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:17.671180 | orchestrator | 2026-01-13 01:10:17 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:10:17.671240 | orchestrator | 2026-01-13 01:10:17 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:10:20.719362 | orchestrator | 2026-01-13 01:10:20 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:20.721600 | orchestrator | 2026-01-13 01:10:20 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:10:20.721664 | orchestrator | 2026-01-13 01:10:20 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:10:23.778139 | orchestrator | 2026-01-13 01:10:23 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:23.780029 | orchestrator | 2026-01-13 01:10:23 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:10:23.780114 | orchestrator | 2026-01-13 01:10:23 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:10:26.832834 | orchestrator | 2026-01-13 01:10:26 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:26.835177 | orchestrator | 2026-01-13 01:10:26 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:10:26.835226 | orchestrator | 2026-01-13 01:10:26 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:10:29.895935 | orchestrator | 2026-01-13 01:10:29 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:29.896995 | orchestrator | 2026-01-13 01:10:29 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 
01:10:29.897597 | orchestrator | 2026-01-13 01:10:29 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:10:32.952053 | orchestrator | 2026-01-13 01:10:32 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:32.952428 | orchestrator | 2026-01-13 01:10:32 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:10:32.952748 | orchestrator | 2026-01-13 01:10:32 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:10:36.002331 | orchestrator | 2026-01-13 01:10:36 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:36.003007 | orchestrator | 2026-01-13 01:10:36 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:10:36.003047 | orchestrator | 2026-01-13 01:10:36 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:10:39.051822 | orchestrator | 2026-01-13 01:10:39 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:39.051899 | orchestrator | 2026-01-13 01:10:39 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:10:39.051907 | orchestrator | 2026-01-13 01:10:39 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:10:42.093833 | orchestrator | 2026-01-13 01:10:42 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:42.095921 | orchestrator | 2026-01-13 01:10:42 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:10:42.096360 | orchestrator | 2026-01-13 01:10:42 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:10:45.138348 | orchestrator | 2026-01-13 01:10:45 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:45.140557 | orchestrator | 2026-01-13 01:10:45 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state STARTED 2026-01-13 01:10:45.140639 | orchestrator | 2026-01-13 01:10:45 | INFO  | Wait 1 second(s) 
until the next check 2026-01-13 01:10:48.185157 | orchestrator | 2026-01-13 01:10:48 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:48.190837 | orchestrator | 2026-01-13 01:10:48 | INFO  | Task 40df7405-e8dc-40ca-8ff8-36b16a2c0c2a is in state SUCCESS 2026-01-13 01:10:48.192049 | orchestrator | 2026-01-13 01:10:48.192088 | orchestrator | 2026-01-13 01:10:48.192096 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 01:10:48.192104 | orchestrator | 2026-01-13 01:10:48.192110 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-01-13 01:10:48.192116 | orchestrator | Tuesday 13 January 2026 01:02:54 +0000 (0:00:00.481) 0:00:00.481 ******* 2026-01-13 01:10:48.192123 | orchestrator | changed: [testbed-manager] 2026-01-13 01:10:48.192130 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:10:48.192137 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:10:48.192143 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:10:48.192149 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:10:48.192155 | orchestrator | changed: [testbed-node-4] 2026-01-13 01:10:48.192161 | orchestrator | changed: [testbed-node-5] 2026-01-13 01:10:48.192168 | orchestrator | 2026-01-13 01:10:48.192174 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 01:10:48.192192 | orchestrator | Tuesday 13 January 2026 01:02:56 +0000 (0:00:01.503) 0:00:01.985 ******* 2026-01-13 01:10:48.192198 | orchestrator | changed: [testbed-manager] 2026-01-13 01:10:48.192204 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:10:48.192211 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:10:48.192217 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:10:48.192223 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:10:48.192230 | orchestrator | changed: [testbed-node-4] 
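The long run of status checks above is produced by a plain poll loop: the client asks the result backend for the state of each Celery task id, logs it, sleeps for the poll interval, and repeats until every task reaches a terminal state (here `SUCCESS` after roughly 100 seconds). A minimal sketch of that pattern, assuming a hypothetical caller-supplied `get_state` callback; this is not the actual OSISM client code:

```python
import time

# Celery-style terminal task states; STARTED/PENDING keep the loop going.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll every task until all of them reach a terminal state.

    get_state is a caller-supplied callback (hypothetical here) that maps a
    task id to its current state string, e.g. by asking the result backend.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        # One round checks every still-pending task, producing one log
        # line per task per round, as in the console output above.
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                results[task_id] = state
        pending -= set(results)
        if pending:
            log(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

Checking all pending tasks in one round before sleeping keeps the log interleaved exactly as above; with `interval=1.0` the cadence matches the one-second waits in the transcript.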
2026-01-13 01:10:48.192236 | orchestrator | changed: [testbed-node-5]
2026-01-13 01:10:48.192244 | orchestrator |
2026-01-13 01:10:48.192250 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-13 01:10:48.192256 | orchestrator | Tuesday 13 January 2026 01:02:56 +0000 (0:00:00.858) 0:00:02.843 *******
2026-01-13 01:10:48.192264 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-01-13 01:10:48.192270 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-01-13 01:10:48.192276 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-01-13 01:10:48.192283 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-01-13 01:10:48.192289 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-01-13 01:10:48.192295 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-01-13 01:10:48.192302 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-01-13 01:10:48.192308 | orchestrator |
2026-01-13 01:10:48.192314 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-01-13 01:10:48.192350 | orchestrator |
2026-01-13 01:10:48.192356 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-13 01:10:48.192362 | orchestrator | Tuesday 13 January 2026 01:02:57 +0000 (0:00:00.933) 0:00:03.777 *******
2026-01-13 01:10:48.192377 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 01:10:48.192387 | orchestrator |
2026-01-13 01:10:48.192393 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-01-13 01:10:48.192398 | orchestrator | Tuesday 13 January 2026 01:02:58 +0000 (0:00:00.737) 0:00:04.514 *******
2026-01-13 01:10:48.192404 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-01-13 01:10:48.192409 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-01-13 01:10:48.192447 | orchestrator |
2026-01-13 01:10:48.192453 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-01-13 01:10:48.192458 | orchestrator | Tuesday 13 January 2026 01:03:02 +0000 (0:00:03.999) 0:00:08.514 *******
2026-01-13 01:10:48.192464 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-13 01:10:48.192469 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-13 01:10:48.192474 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.192488 | orchestrator |
2026-01-13 01:10:48.192498 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-13 01:10:48.192503 | orchestrator | Tuesday 13 January 2026 01:03:07 +0000 (0:00:04.448) 0:00:12.963 *******
2026-01-13 01:10:48.192508 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.192514 | orchestrator |
2026-01-13 01:10:48.192519 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-01-13 01:10:48.192524 | orchestrator | Tuesday 13 January 2026 01:03:07 +0000 (0:00:00.589) 0:00:13.552 *******
2026-01-13 01:10:48.192529 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.192535 | orchestrator |
2026-01-13 01:10:48.192540 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-01-13 01:10:48.192545 | orchestrator | Tuesday 13 January 2026 01:03:08 +0000 (0:00:01.133) 0:00:14.686 *******
2026-01-13 01:10:48.192551 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.192556 | orchestrator |
2026-01-13 01:10:48.192562 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-13 01:10:48.192567 | orchestrator | Tuesday 13 January 2026 01:03:11 +0000 (0:00:02.835) 0:00:17.521 *******
2026-01-13 01:10:48.192573 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.192578 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.192584 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.192589 | orchestrator |
2026-01-13 01:10:48.192606 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-13 01:10:48.192622 | orchestrator | Tuesday 13 January 2026 01:03:12 +0000 (0:00:00.344) 0:00:17.866 *******
2026-01-13 01:10:48.192628 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:10:48.192633 | orchestrator |
2026-01-13 01:10:48.192638 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-01-13 01:10:48.192644 | orchestrator | Tuesday 13 January 2026 01:03:43 +0000 (0:00:31.023) 0:00:48.889 *******
2026-01-13 01:10:48.192649 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.192654 | orchestrator |
2026-01-13 01:10:48.192658 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-13 01:10:48.192735 | orchestrator | Tuesday 13 January 2026 01:03:58 +0000 (0:00:15.495) 0:01:04.384 *******
2026-01-13 01:10:48.192741 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:10:48.192746 | orchestrator |
2026-01-13 01:10:48.192751 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-13 01:10:48.192756 | orchestrator | Tuesday 13 January 2026 01:04:10 +0000 (0:00:11.758) 0:01:16.143 *******
2026-01-13 01:10:48.192770 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:10:48.192776 | orchestrator |
2026-01-13 01:10:48.192781 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-01-13 01:10:48.192795 | orchestrator | Tuesday 13 January 2026 01:04:11 +0000 (0:00:01.248) 0:01:17.392 *******
2026-01-13 01:10:48.192800 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.192805 | orchestrator |
2026-01-13
01:10:48.192810 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-13 01:10:48.192816 | orchestrator | Tuesday 13 January 2026 01:04:12 +0000 (0:00:00.530) 0:01:17.922 *******
2026-01-13 01:10:48.192821 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 01:10:48.192826 | orchestrator |
2026-01-13 01:10:48.192831 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-13 01:10:48.192840 | orchestrator | Tuesday 13 January 2026 01:04:12 +0000 (0:00:00.488) 0:01:18.411 *******
2026-01-13 01:10:48.192845 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:10:48.192849 | orchestrator |
2026-01-13 01:10:48.192854 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-13 01:10:48.192859 | orchestrator | Tuesday 13 January 2026 01:04:30 +0000 (0:00:18.251) 0:01:36.663 *******
2026-01-13 01:10:48.192863 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.192868 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.192872 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.192877 | orchestrator |
2026-01-13 01:10:48.192882 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-01-13 01:10:48.192887 | orchestrator |
2026-01-13 01:10:48.192892 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-13 01:10:48.192897 | orchestrator | Tuesday 13 January 2026 01:04:31 +0000 (0:00:00.249) 0:01:36.913 *******
2026-01-13 01:10:48.192902 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 01:10:48.192908 | orchestrator |
2026-01-13 01:10:48.192913 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-01-13 01:10:48.192919 | orchestrator | Tuesday 13 January 2026 01:04:31 +0000 (0:00:00.442) 0:01:37.355 *******
2026-01-13 01:10:48.192924 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.192930 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.192935 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.192941 | orchestrator |
2026-01-13 01:10:48.192946 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-01-13 01:10:48.192952 | orchestrator | Tuesday 13 January 2026 01:04:33 +0000 (0:00:02.014) 0:01:39.370 *******
2026-01-13 01:10:48.192958 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.192988 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.192994 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.192999 | orchestrator |
2026-01-13 01:10:48.193004 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-13 01:10:48.193010 | orchestrator | Tuesday 13 January 2026 01:04:35 +0000 (0:00:02.284) 0:01:41.654 *******
2026-01-13 01:10:48.193015 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.193021 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.193026 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.193032 | orchestrator |
2026-01-13 01:10:48.193037 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-13 01:10:48.193042 | orchestrator | Tuesday 13 January 2026 01:04:36 +0000 (0:00:00.361) 0:01:42.016 *******
2026-01-13 01:10:48.193048 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-13 01:10:48.193053 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.193058 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-13 01:10:48.193064 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.193069 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-13 01:10:48.193075 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-01-13 01:10:48.193080 | orchestrator |
2026-01-13 01:10:48.193085 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-13 01:10:48.193091 | orchestrator | Tuesday 13 January 2026 01:04:43 +0000 (0:00:06.992) 0:01:49.009 *******
2026-01-13 01:10:48.193101 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.193106 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.193111 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.193117 | orchestrator |
2026-01-13 01:10:48.193122 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-13 01:10:48.193128 | orchestrator | Tuesday 13 January 2026 01:04:43 +0000 (0:00:00.321) 0:01:49.331 *******
2026-01-13 01:10:48.193133 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-13 01:10:48.193138 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.193144 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-13 01:10:48.193149 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.193155 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-13 01:10:48.193160 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.193165 | orchestrator |
2026-01-13 01:10:48.193170 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-01-13 01:10:48.193176 | orchestrator | Tuesday 13 January 2026 01:04:44 +0000 (0:00:00.972) 0:01:50.303 *******
2026-01-13 01:10:48.193181 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.193186 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.193192 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.193197 | orchestrator |
2026-01-13 01:10:48.193202 | orchestrator | TASK [nova-cell : Copying over config.json files for
nova-cell-bootstrap] ******
2026-01-13 01:10:48.193208 | orchestrator | Tuesday 13 January 2026 01:04:45 +0000 (0:00:00.817) 0:01:51.121 *******
2026-01-13 01:10:48.193213 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.193218 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.193223 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.193229 | orchestrator |
2026-01-13 01:10:48.193234 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-01-13 01:10:48.193239 | orchestrator | Tuesday 13 January 2026 01:04:46 +0000 (0:00:00.944) 0:01:52.065 *******
2026-01-13 01:10:48.193245 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.193250 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.193261 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.193267 | orchestrator |
2026-01-13 01:10:48.193272 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-01-13 01:10:48.193277 | orchestrator | Tuesday 13 January 2026 01:04:48 +0000 (0:00:02.147) 0:01:54.213 *******
2026-01-13 01:10:48.193282 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.193287 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.193293 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:10:48.193298 | orchestrator |
2026-01-13 01:10:48.193303 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-13 01:10:48.193309 | orchestrator | Tuesday 13 January 2026 01:05:09 +0000 (0:00:20.709) 0:02:14.922 *******
2026-01-13 01:10:48.193314 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.193319 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.193324 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:10:48.193330 | orchestrator |
2026-01-13 01:10:48.193338 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-13 01:10:48.193344 | orchestrator | Tuesday 13 January 2026 01:05:21 +0000 (0:00:12.434) 0:02:27.356 *******
2026-01-13 01:10:48.193349 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:10:48.193354 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.193359 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.193365 | orchestrator |
2026-01-13 01:10:48.193370 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-01-13 01:10:48.193375 | orchestrator | Tuesday 13 January 2026 01:05:22 +0000 (0:00:00.840) 0:02:28.197 *******
2026-01-13 01:10:48.193380 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.193386 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.193391 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.193399 | orchestrator |
2026-01-13 01:10:48.193405 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-01-13 01:10:48.193410 | orchestrator | Tuesday 13 January 2026 01:05:36 +0000 (0:00:13.770) 0:02:41.968 *******
2026-01-13 01:10:48.193415 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.193421 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.193426 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.193431 | orchestrator |
2026-01-13 01:10:48.193436 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-13 01:10:48.193442 | orchestrator | Tuesday 13 January 2026 01:05:37 +0000 (0:00:01.125) 0:02:43.093 *******
2026-01-13 01:10:48.193447 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.193452 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.193457 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.193463 | orchestrator |
2026-01-13 01:10:48.193468 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-01-13 01:10:48.193474 | orchestrator |
2026-01-13 01:10:48.193479 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-13 01:10:48.193484 | orchestrator | Tuesday 13 January 2026 01:05:37 +0000 (0:00:00.583) 0:02:43.677 *******
2026-01-13 01:10:48.193490 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 01:10:48.193495 | orchestrator |
2026-01-13 01:10:48.193500 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-01-13 01:10:48.193506 | orchestrator | Tuesday 13 January 2026 01:05:38 +0000 (0:00:00.550) 0:02:44.227 *******
2026-01-13 01:10:48.193511 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-01-13 01:10:48.193517 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-01-13 01:10:48.193522 | orchestrator |
2026-01-13 01:10:48.193528 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-01-13 01:10:48.193533 | orchestrator | Tuesday 13 January 2026 01:05:41 +0000 (0:00:03.403) 0:02:47.630 *******
2026-01-13 01:10:48.193538 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-01-13 01:10:48.193545 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-01-13 01:10:48.193550 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-01-13 01:10:48.193556 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-01-13 01:10:48.193561 | orchestrator |
2026-01-13 01:10:48.193567 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-01-13 01:10:48.193572 | orchestrator | Tuesday 13 January 2026 01:05:48 +0000 (0:00:06.635) 0:02:54.266 *******
2026-01-13 01:10:48.193577 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-13 01:10:48.193583 | orchestrator |
2026-01-13 01:10:48.193588 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-01-13 01:10:48.193606 | orchestrator | Tuesday 13 January 2026 01:05:51 +0000 (0:00:02.940) 0:02:57.206 *******
2026-01-13 01:10:48.193611 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-13 01:10:48.193616 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-01-13 01:10:48.193621 | orchestrator |
2026-01-13 01:10:48.193627 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-01-13 01:10:48.193633 | orchestrator | Tuesday 13 January 2026 01:05:55 +0000 (0:00:03.892) 0:03:01.099 *******
2026-01-13 01:10:48.193638 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-13 01:10:48.193644 | orchestrator |
2026-01-13 01:10:48.193650 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-01-13 01:10:48.193655 | orchestrator | Tuesday 13 January 2026 01:05:58 +0000 (0:00:03.096) 0:03:04.195 *******
2026-01-13 01:10:48.193664 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-01-13 01:10:48.193669 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-01-13 01:10:48.193674 | orchestrator |
2026-01-13 01:10:48.193680 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-13 01:10:48.193689 | orchestrator | Tuesday 13 January 2026 01:06:05 +0000 (0:00:07.283) 0:03:11.479 *******
2026-01-13 01:10:48.193701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value':
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-13 01:10:48.193710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-13 01:10:48.193717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-13 01:10:48.193731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.193740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.193746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.193752 | orchestrator |
2026-01-13 01:10:48.193758 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-01-13 01:10:48.193764 | orchestrator | Tuesday 13 January 2026 01:06:07 +0000 (0:00:01.439) 0:03:12.918 *******
2026-01-13 01:10:48.193769 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.193775 | orchestrator |
2026-01-13 01:10:48.193781 |
orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-01-13 01:10:48.193787 | orchestrator | Tuesday 13 January 2026 01:06:07 +0000 (0:00:00.132) 0:03:13.051 ******* 2026-01-13 01:10:48.193792 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.193798 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.193804 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.193810 | orchestrator | 2026-01-13 01:10:48.193815 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-01-13 01:10:48.193821 | orchestrator | Tuesday 13 January 2026 01:06:07 +0000 (0:00:00.279) 0:03:13.331 ******* 2026-01-13 01:10:48.193827 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-13 01:10:48.193832 | orchestrator | 2026-01-13 01:10:48.193838 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-01-13 01:10:48.193844 | orchestrator | Tuesday 13 January 2026 01:06:08 +0000 (0:00:00.860) 0:03:14.191 ******* 2026-01-13 01:10:48.193849 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.193855 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.193861 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.193867 | orchestrator | 2026-01-13 01:10:48.193872 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-13 01:10:48.193878 | orchestrator | Tuesday 13 January 2026 01:06:08 +0000 (0:00:00.275) 0:03:14.467 ******* 2026-01-13 01:10:48.193884 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:10:48.193890 | orchestrator | 2026-01-13 01:10:48.193895 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-13 01:10:48.193901 | orchestrator | Tuesday 13 January 2026 01:06:09 +0000 (0:00:00.528) 0:03:14.995 
******* 2026-01-13 01:10:48.193910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 01:10:48.193922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 01:10:48.193929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 01:10:48.193935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.193947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.193956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.193962 | orchestrator | 2026-01-13 01:10:48.193967 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-13 01:10:48.193973 | orchestrator | Tuesday 13 January 2026 01:06:11 +0000 (0:00:02.601) 0:03:17.596 ******* 2026-01-13 01:10:48.193982 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-13 01:10:48.193988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.193996 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.194001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-13 01:10:48.194010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.194057 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.194071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-13 01:10:48.194077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.194083 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.194089 | orchestrator | 2026-01-13 01:10:48.194095 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-13 01:10:48.194101 | 
orchestrator | Tuesday 13 January 2026 01:06:12 +0000 (0:00:00.601) 0:03:18.198 ******* 2026-01-13 01:10:48.194107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-13 01:10:48.194116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.194122 | 
orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.194328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-13 01:10:48.194373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.194379 | orchestrator | skipping: [testbed-node-1] 
2026-01-13 01:10:48.194384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-13 01:10:48.194396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.194400 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.194403 | orchestrator | 
2026-01-13 01:10:48.194407 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-01-13 01:10:48.194410 | orchestrator | Tuesday 13 January 2026 01:06:13 +0000 (0:00:00.806) 0:03:19.005 ******* 2026-01-13 01:10:48.194422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 01:10:48.194429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 01:10:48.194436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 01:10:48.194442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.194456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.194465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.194470 | orchestrator | 2026-01-13 01:10:48.194474 | orchestrator | TASK [nova : Copying over 
nova.conf] ******************************************* 2026-01-13 01:10:48.194479 | orchestrator | Tuesday 13 January 2026 01:06:16 +0000 (0:00:02.911) 0:03:21.916 ******* 2026-01-13 01:10:48.194484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 01:10:48.194493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 01:10:48.194503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 
01:10:48.194524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.194532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.194542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.194548 | orchestrator | 2026-01-13 01:10:48.194552 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-01-13 
01:10:48.194563 | orchestrator | Tuesday 13 January 2026 01:06:21 +0000 (0:00:05.339) 0:03:27.255 ******* 2026-01-13 01:10:48.194569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-13 01:10:48.194579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 
01:10:48.194584 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.194606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-13 01:10:48.194621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.194627 | orchestrator | skipping: 
[testbed-node-1] 2026-01-13 01:10:48.194640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-13 01:10:48.194646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.194652 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.194658 | 
orchestrator |
2026-01-13 01:10:48.194665 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-01-13 01:10:48.194670 | orchestrator | Tuesday 13 January 2026 01:06:21 +0000 (0:00:00.572) 0:03:27.828 *******
2026-01-13 01:10:48.194675 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.194680 | orchestrator | changed: [testbed-node-1]
2026-01-13 01:10:48.194685 | orchestrator | changed: [testbed-node-2]
2026-01-13 01:10:48.194691 | orchestrator |
2026-01-13 01:10:48.194708 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-01-13 01:10:48.194717 | orchestrator | Tuesday 13 January 2026 01:06:23 +0000 (0:00:01.422) 0:03:29.250 *******
2026-01-13 01:10:48.194722 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.194728 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.194734 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.194740 | orchestrator |
2026-01-13 01:10:48.194746 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-01-13 01:10:48.194753 | orchestrator | Tuesday 13 January 2026 01:06:23 +0000 (0:00:00.329) 0:03:29.580 *******
2026-01-13 01:10:48.194760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port':
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 01:10:48.194770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 01:10:48.194778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-13 01:10:48.194783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.194792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.194796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.194802 | orchestrator |
2026-01-13 01:10:48.194811 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-01-13 01:10:48.194817 | orchestrator | Tuesday 13 January 2026 01:06:25 +0000 (0:00:02.118) 0:03:31.699 *******
2026-01-13 01:10:48.194822 | orchestrator |
2026-01-13 01:10:48.194827 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-01-13 01:10:48.194832 | orchestrator | Tuesday 13 January 2026 01:06:25 +0000 (0:00:00.139) 0:03:31.839 *******
2026-01-13 01:10:48.194837 | orchestrator |
2026-01-13 01:10:48.194843 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-01-13 01:10:48.194848 | orchestrator | Tuesday 13 January 2026 01:06:26 +0000 (0:00:00.135) 0:03:31.974 *******
2026-01-13 01:10:48.194853 | orchestrator |
2026-01-13 01:10:48.194858 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-01-13 01:10:48.194862 | orchestrator | Tuesday 13 January 2026 01:06:26 +0000 (0:00:00.152) 0:03:32.126 *******
2026-01-13 01:10:48.194868 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.194874 | orchestrator | changed: [testbed-node-1]
2026-01-13 01:10:48.194880 | orchestrator | changed: [testbed-node-2]
2026-01-13 01:10:48.194885 | orchestrator |
2026-01-13 01:10:48.194889 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-01-13 01:10:48.194895 | orchestrator | Tuesday 13 January 2026 01:06:41 +0000 (0:00:15.330) 0:03:47.456 *******
2026-01-13 01:10:48.194900 | orchestrator | changed: [testbed-node-2]
2026-01-13 01:10:48.194905 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:10:48.194910 | orchestrator | changed: [testbed-node-1]
2026-01-13 01:10:48.194915 | orchestrator |
2026-01-13 01:10:48.194925 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-01-13 01:10:48.194935 | orchestrator |
2026-01-13 01:10:48.194946 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-13 01:10:48.194957 | orchestrator | Tuesday 13 January 2026 01:06:52 +0000 (0:00:11.102) 0:03:58.559 *******
2026-01-13 01:10:48.194968 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-13 01:10:48.194980 | orchestrator |
2026-01-13 01:10:48.194989 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-13 01:10:48.194999 | orchestrator | Tuesday 13 January 2026 01:06:54 +0000 (0:00:01.376) 0:03:59.935 *******
2026-01-13 01:10:48.195009 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:10:48.195019 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:10:48.195028 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:10:48.195039 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.195061 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.195072 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.195083 | orchestrator |
2026-01-13 01:10:48.195092 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-01-13 01:10:48.195098 | orchestrator | Tuesday 13 January 2026 01:06:54 +0000 (0:00:00.523) 0:04:00.459 *******
2026-01-13 01:10:48.195103 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.195108 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.195114 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.195120 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-13 01:10:48.195130 | orchestrator |
2026-01-13 01:10:48.195141 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-13 01:10:48.195157 | orchestrator | Tuesday 13 January 2026 01:06:55 +0000 (0:00:00.602) 0:04:01.360 *******
2026-01-13 01:10:48.195169 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-01-13 01:10:48.195178 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-01-13 01:10:48.195189 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-01-13 01:10:48.195199 | orchestrator |
2026-01-13 01:10:48.195209 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-13 01:10:48.195221 | orchestrator | Tuesday 13 January 2026 01:06:56 +0000 (0:00:00.602) 0:04:01.963 *******
2026-01-13 01:10:48.195232 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-01-13 01:10:48.195243 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-01-13 01:10:48.195254 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-01-13 01:10:48.195264 | orchestrator |
2026-01-13 01:10:48.195275 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-13 01:10:48.195292 | orchestrator | Tuesday 13 January 2026 01:06:57 +0000 (0:00:01.192) 0:04:03.155 *******
2026-01-13 01:10:48.195303 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-01-13 01:10:48.195315 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:10:48.195324 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-01-13 01:10:48.195336 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:10:48.195346 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-01-13 01:10:48.195353 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:10:48.195358 | orchestrator |
2026-01-13 01:10:48.195363 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-01-13 01:10:48.195368 | orchestrator | Tuesday 13 January 2026 01:06:57 +0000 (0:00:00.443) 0:04:03.599 *******
2026-01-13 01:10:48.195373 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-13 01:10:48.195379 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-13 01:10:48.195384 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.195389 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-13 01:10:48.195395 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-13 01:10:48.195399 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.195405 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-13 01:10:48.195410 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-13 01:10:48.195415 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.195420 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-13 01:10:48.195425 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-13 01:10:48.195430 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-13 01:10:48.195436 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-13 01:10:48.195441 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-13 01:10:48.195452 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-13 01:10:48.195458 | orchestrator |
2026-01-13 01:10:48.195463 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-01-13 01:10:48.195467 | orchestrator | Tuesday 13 January 2026 01:06:58 +0000 (0:00:01.225) 0:04:04.824 *******
2026-01-13 01:10:48.195472 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.195477 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.195482 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.195487 | orchestrator | changed: [testbed-node-3]
2026-01-13 01:10:48.195492 | orchestrator | changed: [testbed-node-5]
2026-01-13 01:10:48.195497 | orchestrator | changed: [testbed-node-4]
2026-01-13 01:10:48.195502 | orchestrator |
2026-01-13 01:10:48.195507 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-01-13 01:10:48.195512 | orchestrator | Tuesday 13 January 2026 01:07:00 +0000 (0:00:01.134) 0:04:05.958 *******
2026-01-13 01:10:48.195517 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.195522 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.195527 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.195532 | orchestrator | changed: [testbed-node-3]
2026-01-13 01:10:48.195537 | orchestrator | changed: [testbed-node-5]
2026-01-13 01:10:48.195545 |
orchestrator | changed: [testbed-node-4] 2026-01-13 01:10:48.195550 | orchestrator | 2026-01-13 01:10:48.195555 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-13 01:10:48.195560 | orchestrator | Tuesday 13 January 2026 01:07:01 +0000 (0:00:01.787) 0:04:07.746 ******* 2026-01-13 01:10:48.195567 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195612 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195636 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195650 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195670 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195683 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195697 | orchestrator | 2026-01-13 01:10:48.195703 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-13 01:10:48.195708 | orchestrator | Tuesday 13 January 2026 01:07:03 +0000 (0:00:02.011) 0:04:09.758 ******* 2026-01-13 01:10:48.195714 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:10:48.195719 | orchestrator | 2026-01-13 01:10:48.195724 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-13 01:10:48.195728 | orchestrator | Tuesday 13 January 2026 01:07:05 +0000 (0:00:01.374) 0:04:11.132 ******* 2026-01-13 01:10:48.195733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195739 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195747 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195768 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195779 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195813 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.195829 | orchestrator | 2026-01-13 01:10:48.195834 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-13 01:10:48.195839 | orchestrator | Tuesday 13 January 2026 01:07:08 +0000 (0:00:03.150) 0:04:14.283 ******* 2026-01-13 01:10:48.195848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-13 01:10:48.195860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-13 01:10:48.195866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.195871 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:10:48.195876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-13 01:10:48.195881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-13 01:10:48.195891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-13 01:10:48.195904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.195909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-13 01:10:48.195914 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:10:48.195921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  
2026-01-13 01:10:48.195926 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:10:48.195931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-13 01:10:48.195937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.195942 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.195950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-13 01:10:48.195961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.195967 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.195972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-13 01:10:48.195977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2026-01-13 01:10:48.195982 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.195987 | orchestrator | 2026-01-13 01:10:48.195993 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-13 01:10:48.195997 | orchestrator | Tuesday 13 January 2026 01:07:10 +0000 (0:00:01.570) 0:04:15.854 ******* 2026-01-13 01:10:48.196003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-13 01:10:48.196008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-13 01:10:48.196016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.196025 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:10:48.196033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-13 01:10:48.196038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-13 01:10:48.196043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-13 01:10:48.196048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.196053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.196062 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.196066 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:10:48.196074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-13 01:10:48.196082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2026-01-13 01:10:48.196087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.196091 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:10:48.196096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-13 01:10:48.196102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.196107 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.196112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-13 01:10:48.196123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-13 01:10:48.196128 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.196133 | orchestrator | 2026-01-13 01:10:48.196138 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-13 01:10:48.196142 | orchestrator | Tuesday 13 January 2026 01:07:12 +0000 (0:00:02.261) 0:04:18.115 ******* 2026-01-13 01:10:48.196147 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.196152 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.196157 | 
orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.196163 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-13 01:10:48.196168 | orchestrator | 2026-01-13 01:10:48.196173 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-01-13 01:10:48.196178 | orchestrator | Tuesday 13 January 2026 01:07:13 +0000 (0:00:01.053) 0:04:19.169 ******* 2026-01-13 01:10:48.196183 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-13 01:10:48.196188 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-13 01:10:48.196193 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-13 01:10:48.196197 | orchestrator | 2026-01-13 01:10:48.196202 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-01-13 01:10:48.196206 | orchestrator | Tuesday 13 January 2026 01:07:14 +0000 (0:00:00.946) 0:04:20.115 ******* 2026-01-13 01:10:48.196211 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-13 01:10:48.196215 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-13 01:10:48.196220 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-13 01:10:48.196224 | orchestrator | 2026-01-13 01:10:48.196229 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-01-13 01:10:48.196234 | orchestrator | Tuesday 13 January 2026 01:07:15 +0000 (0:00:00.874) 0:04:20.990 ******* 2026-01-13 01:10:48.196239 | orchestrator | ok: [testbed-node-3] 2026-01-13 01:10:48.196244 | orchestrator | ok: [testbed-node-4] 2026-01-13 01:10:48.196249 | orchestrator | ok: [testbed-node-5] 2026-01-13 01:10:48.196254 | orchestrator | 2026-01-13 01:10:48.196259 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-01-13 01:10:48.196264 | orchestrator | Tuesday 13 January 2026 01:07:15 +0000 (0:00:00.486) 0:04:21.476 
******* 2026-01-13 01:10:48.196269 | orchestrator | ok: [testbed-node-3] 2026-01-13 01:10:48.196274 | orchestrator | ok: [testbed-node-4] 2026-01-13 01:10:48.196279 | orchestrator | ok: [testbed-node-5] 2026-01-13 01:10:48.196284 | orchestrator | 2026-01-13 01:10:48.196289 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-01-13 01:10:48.196294 | orchestrator | Tuesday 13 January 2026 01:07:16 +0000 (0:00:00.759) 0:04:22.236 ******* 2026-01-13 01:10:48.196298 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-13 01:10:48.196303 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-13 01:10:48.196308 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-13 01:10:48.196317 | orchestrator | 2026-01-13 01:10:48.196321 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-01-13 01:10:48.196326 | orchestrator | Tuesday 13 January 2026 01:07:17 +0000 (0:00:01.050) 0:04:23.287 ******* 2026-01-13 01:10:48.196331 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-13 01:10:48.196336 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-13 01:10:48.196340 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-13 01:10:48.196345 | orchestrator | 2026-01-13 01:10:48.196349 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-01-13 01:10:48.196354 | orchestrator | Tuesday 13 January 2026 01:07:18 +0000 (0:00:01.071) 0:04:24.358 ******* 2026-01-13 01:10:48.196359 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-13 01:10:48.196364 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-13 01:10:48.196368 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-13 01:10:48.196373 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 
2026-01-13 01:10:48.196378 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-01-13 01:10:48.196383 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-01-13 01:10:48.196388 | orchestrator | 2026-01-13 01:10:48.196393 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-01-13 01:10:48.196397 | orchestrator | Tuesday 13 January 2026 01:07:21 +0000 (0:00:03.291) 0:04:27.650 ******* 2026-01-13 01:10:48.196402 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:10:48.196407 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:10:48.196412 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:10:48.196417 | orchestrator | 2026-01-13 01:10:48.196421 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-01-13 01:10:48.196426 | orchestrator | Tuesday 13 January 2026 01:07:22 +0000 (0:00:00.545) 0:04:28.195 ******* 2026-01-13 01:10:48.196431 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:10:48.196435 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:10:48.196440 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:10:48.196445 | orchestrator | 2026-01-13 01:10:48.196450 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-01-13 01:10:48.196455 | orchestrator | Tuesday 13 January 2026 01:07:22 +0000 (0:00:00.310) 0:04:28.506 ******* 2026-01-13 01:10:48.196459 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:10:48.196464 | orchestrator | changed: [testbed-node-5] 2026-01-13 01:10:48.196469 | orchestrator | changed: [testbed-node-4] 2026-01-13 01:10:48.196474 | orchestrator | 2026-01-13 01:10:48.196479 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-01-13 01:10:48.196484 | orchestrator | Tuesday 13 January 2026 01:07:23 +0000 (0:00:01.119) 0:04:29.625 ******* 2026-01-13 01:10:48.196502 | 
orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-13 01:10:48.196508 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-13 01:10:48.196513 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-13 01:10:48.196519 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-13 01:10:48.196524 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-13 01:10:48.196533 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-13 01:10:48.196538 | orchestrator | 2026-01-13 01:10:48.196543 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-01-13 01:10:48.196551 | orchestrator | Tuesday 13 January 2026 01:07:26 +0000 (0:00:03.072) 0:04:32.697 ******* 2026-01-13 01:10:48.196556 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-13 01:10:48.196561 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-13 01:10:48.196566 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-13 01:10:48.196571 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-13 01:10:48.196576 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:10:48.196581 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-13 01:10:48.196586 | orchestrator | changed: [testbed-node-4] 2026-01-13 01:10:48.196590 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-13 01:10:48.196666 
| orchestrator | changed: [testbed-node-5] 2026-01-13 01:10:48.196672 | orchestrator | 2026-01-13 01:10:48.196677 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-01-13 01:10:48.196682 | orchestrator | Tuesday 13 January 2026 01:07:29 +0000 (0:00:02.996) 0:04:35.694 ******* 2026-01-13 01:10:48.196686 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:10:48.196691 | orchestrator | 2026-01-13 01:10:48.196696 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-01-13 01:10:48.196701 | orchestrator | Tuesday 13 January 2026 01:07:29 +0000 (0:00:00.126) 0:04:35.821 ******* 2026-01-13 01:10:48.196706 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:10:48.196711 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:10:48.196715 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:10:48.196720 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.196725 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.196730 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.196735 | orchestrator | 2026-01-13 01:10:48.196740 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-01-13 01:10:48.196744 | orchestrator | Tuesday 13 January 2026 01:07:30 +0000 (0:00:00.594) 0:04:36.416 ******* 2026-01-13 01:10:48.196749 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-13 01:10:48.196754 | orchestrator | 2026-01-13 01:10:48.196759 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-01-13 01:10:48.196764 | orchestrator | Tuesday 13 January 2026 01:07:31 +0000 (0:00:00.676) 0:04:37.092 ******* 2026-01-13 01:10:48.196769 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:10:48.196774 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:10:48.196779 | orchestrator | skipping: [testbed-node-5] 2026-01-13 
01:10:48.196784 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.196789 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.196794 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.196799 | orchestrator | 2026-01-13 01:10:48.196804 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-01-13 01:10:48.196809 | orchestrator | Tuesday 13 January 2026 01:07:32 +0000 (0:00:00.833) 0:04:37.925 ******* 2026-01-13 01:10:48.196815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196849 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196859 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196882 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196896 | orchestrator | 2026-01-13 01:10:48.196901 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-01-13 01:10:48.196905 | orchestrator | Tuesday 13 January 2026 01:07:35 +0000 (0:00:03.308) 0:04:41.234 ******* 2026-01-13 01:10:48.196908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-13 01:10:48.196912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-13 01:10:48.196915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}})  2026-01-13 01:10:48.196919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-13 01:10:48.196927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-13 01:10:48.196933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-13 01:10:48.196936 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196939 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196948 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-13 01:10:48.196968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.196971 | orchestrator |
2026-01-13 01:10:48.196977 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-01-13 01:10:48.196980 | orchestrator | Tuesday 13 January 2026 01:07:41 +0000 (0:00:06.347) 0:04:47.582 *******
2026-01-13 01:10:48.196983 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:10:48.196986 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:10:48.196990 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:10:48.196993 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.196996 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.196999 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.197003 | orchestrator |
2026-01-13 01:10:48.197006 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-01-13 01:10:48.197009 | orchestrator | Tuesday 13 January 2026 01:07:43 +0000 (0:00:01.393) 0:04:48.975 *******
2026-01-13 01:10:48.197012 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-13 01:10:48.197015 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-13 01:10:48.197018 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-13 01:10:48.197021 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-13 01:10:48.197024 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-13 01:10:48.197027 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-13 01:10:48.197031 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.197034 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-13 01:10:48.197037 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.197043 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-13 01:10:48.197046 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.197049 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-01-13 01:10:48.197052 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-13 01:10:48.197055 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-13 01:10:48.197059 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-01-13 01:10:48.197062 | orchestrator |
2026-01-13 01:10:48.197065 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-01-13 01:10:48.197070 | orchestrator | Tuesday 13 January 2026 01:07:46 +0000 (0:00:03.656) 0:04:52.631 *******
2026-01-13 01:10:48.197073 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:10:48.197076 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:10:48.197079 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:10:48.197082 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.197085 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.197088 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.197092 | orchestrator |
2026-01-13 01:10:48.197097 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-01-13 01:10:48.197102 | orchestrator | Tuesday 13 January 2026 01:07:47 +0000 (0:00:00.613) 0:04:53.245 *******
2026-01-13 01:10:48.197107 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-13 01:10:48.197112 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-13 01:10:48.197117 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-13 01:10:48.197123 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-13 01:10:48.197134 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-13 01:10:48.197139 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-13 01:10:48.197144 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-13 01:10:48.197149 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-13 01:10:48.197154 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-01-13 01:10:48.197160 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-13 01:10:48.197163 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.197166 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-13 01:10:48.197169 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.197172 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-13 01:10:48.197176 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.197182 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-13 01:10:48.197186 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-13 01:10:48.197191 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-01-13 01:10:48.197196 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-13 01:10:48.197201 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-13 01:10:48.197206 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-01-13 01:10:48.197210 | orchestrator |
2026-01-13 01:10:48.197214 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-01-13 01:10:48.197218 | orchestrator | Tuesday 13 January 2026 01:07:52 +0000 (0:00:05.086) 0:04:58.332 *******
2026-01-13 01:10:48.197223 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-13 01:10:48.197227 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-13 01:10:48.197232 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-13 01:10:48.197236 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-13 01:10:48.197240 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-13 01:10:48.197245 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-13 01:10:48.197254 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-13 01:10:48.197258 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-13 01:10:48.197263 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-13 01:10:48.197267 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-13 01:10:48.197272 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-13 01:10:48.197277 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-13 01:10:48.197282 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-13 01:10:48.197295 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.197300 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-13 01:10:48.197305 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-13 01:10:48.197309 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.197314 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-13 01:10:48.197319 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.197323 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-13 01:10:48.197328 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-13 01:10:48.197333 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-13 01:10:48.197338 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-13 01:10:48.197344 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-13 01:10:48.197349 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-13 01:10:48.197354 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-13 01:10:48.197359 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-13 01:10:48.197365 | orchestrator |
2026-01-13 01:10:48.197369 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-01-13 01:10:48.197374 | orchestrator | Tuesday 13 January 2026 01:07:58 +0000 (0:00:06.504) 0:05:04.836 *******
2026-01-13 01:10:48.197379 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:10:48.197384 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:10:48.197387 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:10:48.197390 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.197393 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.197396 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.197399 | orchestrator |
2026-01-13 01:10:48.197402 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-01-13 01:10:48.197405 | orchestrator | Tuesday 13 January 2026 01:07:59 +0000 (0:00:00.796) 0:05:05.633 *******
2026-01-13 01:10:48.197409 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:10:48.197414 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:10:48.197423 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:10:48.197429 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.197434 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.197439 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.197444 | orchestrator |
2026-01-13 01:10:48.197449 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-01-13 01:10:48.197453 | orchestrator | Tuesday 13 January 2026 01:08:00 +0000 (0:00:00.575) 0:05:06.209 *******
2026-01-13 01:10:48.197458 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.197463 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.197468 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.197472 | orchestrator | changed: [testbed-node-5]
2026-01-13 01:10:48.197477 | orchestrator | changed: [testbed-node-3]
2026-01-13 01:10:48.197482 | orchestrator | changed: [testbed-node-4]
2026-01-13 01:10:48.197487 | orchestrator |
2026-01-13 01:10:48.197492 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-01-13 01:10:48.197498 | orchestrator | Tuesday 13 January 2026 01:08:02 +0000 (0:00:02.045) 0:05:08.254 *******
2026-01-13 01:10:48.197506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-13 01:10:48.197524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-13 01:10:48.197535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.197540 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:10:48.197545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-13 01:10:48.197551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-13 01:10:48.197557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.197567 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:10:48.197576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-13 01:10:48.197585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-13 01:10:48.197590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.197626 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:10:48.197631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-13 01:10:48.197636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.197641 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.197645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-13 01:10:48.197653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-13 01:10:48.197662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.197666 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.197671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.197675 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.197678 | orchestrator |
2026-01-13 01:10:48.197681 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-01-13 01:10:48.197685 | orchestrator | Tuesday 13 January 2026 01:08:03 +0000 (0:00:01.211) 0:05:09.466 *******
2026-01-13 01:10:48.197688 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-01-13 01:10:48.197691 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-01-13 01:10:48.197694 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:10:48.197698 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-01-13 01:10:48.197701 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-01-13 01:10:48.197704 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:10:48.197707 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-01-13 01:10:48.197710 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-01-13 01:10:48.197713 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:10:48.197716 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-01-13 01:10:48.197719 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-01-13 01:10:48.197722 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.197725 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-01-13 01:10:48.197729 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-01-13 01:10:48.197732 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.197735 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-01-13 01:10:48.197740 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-01-13 01:10:48.197743 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.197746 | orchestrator |
2026-01-13 01:10:48.197749 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-01-13 01:10:48.197752 | orchestrator | Tuesday 13 January 2026 01:08:04 +0000 (0:00:00.834) 0:05:10.300 *******
2026-01-13 01:10:48.197756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-13 01:10:48.197762 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-13 01:10:48.197768 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-13 01:10:48.197771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-13 01:10:48.197775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-13 01:10:48.197781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-13 01:10:48.197784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-13 01:10:48.197787 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-13 01:10:48.197793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-13 01:10:48.197798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.197801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.197807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.197810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.197814 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.197820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-13 01:10:48.197824 | orchestrator |
2026-01-13 01:10:48.197827 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-13 01:10:48.197832 | orchestrator | Tuesday 13 January 2026 01:08:07 +0000 (0:00:03.009) 0:05:13.309 *******
2026-01-13 01:10:48.197835 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:10:48.197838 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:10:48.197841 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:10:48.197844 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:10:48.197847 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:10:48.197850 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:10:48.197853 | orchestrator |
2026-01-13 01:10:48.197857 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-13 01:10:48.197860 | orchestrator | Tuesday 13 January 2026 01:08:08 +0000 (0:00:00.745) 0:05:14.055 *******
2026-01-13 01:10:48.197863 | orchestrator |
2026-01-13 01:10:48.197866 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-13 01:10:48.197869 | orchestrator | Tuesday 13 January 2026 01:08:08 +0000 (0:00:00.134) 0:05:14.189 *******
2026-01-13 01:10:48.197872 | orchestrator |
2026-01-13 01:10:48.197875 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-13 01:10:48.197881 | orchestrator | Tuesday 13 January 2026 01:08:08 +0000 (0:00:00.130) 0:05:14.320 *******
2026-01-13 01:10:48.197884 | orchestrator |
2026-01-13 01:10:48.197887 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-13 01:10:48.197890 | orchestrator | Tuesday 13 January 2026 01:08:08 +0000 (0:00:00.135) 0:05:14.456 *******
2026-01-13 01:10:48.197893 | orchestrator |
2026-01-13 01:10:48.197896 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-13 01:10:48.197899 | orchestrator | Tuesday 13 January 2026 01:08:08 +0000 (0:00:00.137) 0:05:14.594 *******
2026-01-13 01:10:48.197903 | orchestrator |
2026-01-13 01:10:48.197906 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-01-13 01:10:48.197909 | orchestrator | Tuesday 13 January 2026 01:08:08 +0000 (0:00:00.129) 0:05:14.723 *******
2026-01-13 01:10:48.197913 | orchestrator |
2026-01-13 01:10:48.197916 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-01-13 01:10:48.197920 | orchestrator | Tuesday 13 January 2026 01:08:09 +0000 (0:00:00.362) 0:05:15.086 *******
2026-01-13 01:10:48.197923 | orchestrator | changed:
[testbed-node-1] 2026-01-13 01:10:48.197927 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:10:48.197930 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:10:48.197934 | orchestrator | 2026-01-13 01:10:48.197937 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-01-13 01:10:48.197941 | orchestrator | Tuesday 13 January 2026 01:08:18 +0000 (0:00:09.493) 0:05:24.580 ******* 2026-01-13 01:10:48.197944 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:10:48.197947 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:10:48.197951 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:10:48.197954 | orchestrator | 2026-01-13 01:10:48.197958 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-01-13 01:10:48.197961 | orchestrator | Tuesday 13 January 2026 01:08:29 +0000 (0:00:10.888) 0:05:35.468 ******* 2026-01-13 01:10:48.197965 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:10:48.197968 | orchestrator | changed: [testbed-node-4] 2026-01-13 01:10:48.197971 | orchestrator | changed: [testbed-node-5] 2026-01-13 01:10:48.197975 | orchestrator | 2026-01-13 01:10:48.197979 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-01-13 01:10:48.197982 | orchestrator | Tuesday 13 January 2026 01:08:44 +0000 (0:00:15.133) 0:05:50.602 ******* 2026-01-13 01:10:48.197986 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:10:48.197989 | orchestrator | changed: [testbed-node-4] 2026-01-13 01:10:48.197993 | orchestrator | changed: [testbed-node-5] 2026-01-13 01:10:48.197996 | orchestrator | 2026-01-13 01:10:48.198000 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-01-13 01:10:48.198003 | orchestrator | Tuesday 13 January 2026 01:09:09 +0000 (0:00:24.319) 0:06:14.922 ******* 2026-01-13 01:10:48.198007 | orchestrator | FAILED - RETRYING: 
[testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-01-13 01:10:48.198010 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-01-13 01:10:48.198039 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-01-13 01:10:48.198043 | orchestrator | changed: [testbed-node-4] 2026-01-13 01:10:48.198046 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:10:48.198050 | orchestrator | changed: [testbed-node-5] 2026-01-13 01:10:48.198053 | orchestrator | 2026-01-13 01:10:48.198057 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-01-13 01:10:48.198060 | orchestrator | Tuesday 13 January 2026 01:09:15 +0000 (0:00:06.168) 0:06:21.090 ******* 2026-01-13 01:10:48.198064 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:10:48.198067 | orchestrator | changed: [testbed-node-4] 2026-01-13 01:10:48.198071 | orchestrator | changed: [testbed-node-5] 2026-01-13 01:10:48.198074 | orchestrator | 2026-01-13 01:10:48.198078 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-01-13 01:10:48.198085 | orchestrator | Tuesday 13 January 2026 01:09:15 +0000 (0:00:00.751) 0:06:21.842 ******* 2026-01-13 01:10:48.198088 | orchestrator | changed: [testbed-node-5] 2026-01-13 01:10:48.198093 | orchestrator | changed: [testbed-node-4] 2026-01-13 01:10:48.198099 | orchestrator | changed: [testbed-node-3] 2026-01-13 01:10:48.198104 | orchestrator | 2026-01-13 01:10:48.198114 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-01-13 01:10:48.198120 | orchestrator | Tuesday 13 January 2026 01:09:38 +0000 (0:00:22.420) 0:06:44.262 ******* 2026-01-13 01:10:48.198126 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:10:48.198132 | orchestrator | 2026-01-13 01:10:48.198138 | 
orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-01-13 01:10:48.198144 | orchestrator | Tuesday 13 January 2026 01:09:38 +0000 (0:00:00.123) 0:06:44.386 ******* 2026-01-13 01:10:48.198150 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:10:48.198156 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:10:48.198162 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.198166 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.198170 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.198176 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-01-13 01:10:48.198180 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-13 01:10:48.198184 | orchestrator | 2026-01-13 01:10:48.198188 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-01-13 01:10:48.198191 | orchestrator | Tuesday 13 January 2026 01:09:59 +0000 (0:00:20.923) 0:07:05.309 ******* 2026-01-13 01:10:48.198195 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:10:48.198198 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:10:48.198202 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.198205 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.198209 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.198213 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:10:48.198216 | orchestrator | 2026-01-13 01:10:48.198219 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-01-13 01:10:48.198223 | orchestrator | Tuesday 13 January 2026 01:10:07 +0000 (0:00:07.722) 0:07:13.031 ******* 2026-01-13 01:10:48.198226 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:10:48.198230 | orchestrator | skipping: [testbed-node-4] 2026-01-13 
01:10:48.198233 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.198237 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.198240 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.198244 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-01-13 01:10:48.198247 | orchestrator | 2026-01-13 01:10:48.198251 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-01-13 01:10:48.198254 | orchestrator | Tuesday 13 January 2026 01:10:10 +0000 (0:00:03.231) 0:07:16.263 ******* 2026-01-13 01:10:48.198258 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-13 01:10:48.198262 | orchestrator | 2026-01-13 01:10:48.198265 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-13 01:10:48.198269 | orchestrator | Tuesday 13 January 2026 01:10:24 +0000 (0:00:13.846) 0:07:30.109 ******* 2026-01-13 01:10:48.198272 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-13 01:10:48.198276 | orchestrator | 2026-01-13 01:10:48.198279 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-01-13 01:10:48.198283 | orchestrator | Tuesday 13 January 2026 01:10:25 +0000 (0:00:01.234) 0:07:31.344 ******* 2026-01-13 01:10:48.198287 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:10:48.198290 | orchestrator | 2026-01-13 01:10:48.198294 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-01-13 01:10:48.198297 | orchestrator | Tuesday 13 January 2026 01:10:26 +0000 (0:00:01.284) 0:07:32.629 ******* 2026-01-13 01:10:48.198304 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-13 01:10:48.198308 | orchestrator | 2026-01-13 01:10:48.198311 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] 
************ 2026-01-13 01:10:48.198314 | orchestrator | Tuesday 13 January 2026 01:10:39 +0000 (0:00:12.681) 0:07:45.310 ******* 2026-01-13 01:10:48.198317 | orchestrator | ok: [testbed-node-3] 2026-01-13 01:10:48.198320 | orchestrator | ok: [testbed-node-4] 2026-01-13 01:10:48.198323 | orchestrator | ok: [testbed-node-5] 2026-01-13 01:10:48.198326 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:10:48.198329 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:10:48.198332 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:10:48.198335 | orchestrator | 2026-01-13 01:10:48.198338 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-01-13 01:10:48.198341 | orchestrator | 2026-01-13 01:10:48.198344 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-01-13 01:10:48.198347 | orchestrator | Tuesday 13 January 2026 01:10:41 +0000 (0:00:01.689) 0:07:46.999 ******* 2026-01-13 01:10:48.198351 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:10:48.198354 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:10:48.198357 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:10:48.198360 | orchestrator | 2026-01-13 01:10:48.198363 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-01-13 01:10:48.198366 | orchestrator | 2026-01-13 01:10:48.198369 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-01-13 01:10:48.198372 | orchestrator | Tuesday 13 January 2026 01:10:42 +0000 (0:00:01.134) 0:07:48.134 ******* 2026-01-13 01:10:48.198375 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.198378 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.198381 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.198384 | orchestrator | 2026-01-13 01:10:48.198387 | orchestrator | PLAY [Reload Nova cell services] 
*********************************************** 2026-01-13 01:10:48.198390 | orchestrator | 2026-01-13 01:10:48.198393 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-01-13 01:10:48.198396 | orchestrator | Tuesday 13 January 2026 01:10:42 +0000 (0:00:00.530) 0:07:48.664 ******* 2026-01-13 01:10:48.198399 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-01-13 01:10:48.198403 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-13 01:10:48.198406 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-13 01:10:48.198409 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-01-13 01:10:48.198415 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-01-13 01:10:48.198418 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-01-13 01:10:48.198421 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:10:48.198424 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-01-13 01:10:48.198427 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-13 01:10:48.198430 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-13 01:10:48.198433 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-01-13 01:10:48.198436 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-01-13 01:10:48.198439 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-01-13 01:10:48.198442 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:10:48.198445 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-01-13 01:10:48.198452 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-13 01:10:48.198457 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-13 01:10:48.198462 | orchestrator 
| skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-01-13 01:10:48.198467 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-01-13 01:10:48.198489 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-01-13 01:10:48.198495 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:10:48.198501 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-01-13 01:10:48.198505 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-13 01:10:48.198509 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-13 01:10:48.198514 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-01-13 01:10:48.198518 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-01-13 01:10:48.198522 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-01-13 01:10:48.198527 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.198531 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-01-13 01:10:48.198535 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-13 01:10:48.198539 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-13 01:10:48.198544 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-01-13 01:10:48.198548 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-01-13 01:10:48.198552 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-01-13 01:10:48.198557 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.198561 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-01-13 01:10:48.198565 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-13 01:10:48.198570 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-13 01:10:48.198574 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-01-13 01:10:48.198579 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-01-13 01:10:48.198583 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-01-13 01:10:48.198588 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.198602 | orchestrator | 2026-01-13 01:10:48.198608 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-01-13 01:10:48.198612 | orchestrator | 2026-01-13 01:10:48.198617 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-01-13 01:10:48.198622 | orchestrator | Tuesday 13 January 2026 01:10:44 +0000 (0:00:01.364) 0:07:50.029 ******* 2026-01-13 01:10:48.198627 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-01-13 01:10:48.198692 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-01-13 01:10:48.198699 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.198705 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-01-13 01:10:48.198709 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-01-13 01:10:48.198714 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.198719 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-01-13 01:10:48.198723 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-01-13 01:10:48.198728 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.198733 | orchestrator | 2026-01-13 01:10:48.198739 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-01-13 01:10:48.198744 | orchestrator | 2026-01-13 01:10:48.198749 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-01-13 01:10:48.198760 | orchestrator | Tuesday 13 January 2026 01:10:44 +0000 (0:00:00.747) 
0:07:50.776 ******* 2026-01-13 01:10:48.198769 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.198774 | orchestrator | 2026-01-13 01:10:48.198778 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-01-13 01:10:48.198783 | orchestrator | 2026-01-13 01:10:48.198788 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-01-13 01:10:48.198793 | orchestrator | Tuesday 13 January 2026 01:10:45 +0000 (0:00:00.622) 0:07:51.399 ******* 2026-01-13 01:10:48.198798 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:10:48.198809 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:10:48.198813 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:10:48.198817 | orchestrator | 2026-01-13 01:10:48.198822 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:10:48.198827 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:10:48.198834 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-01-13 01:10:48.198839 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-01-13 01:10:48.198844 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-01-13 01:10:48.198850 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-13 01:10:48.198854 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-13 01:10:48.198863 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-13 01:10:48.198869 | orchestrator | 2026-01-13 01:10:48.198874 | orchestrator | 2026-01-13 01:10:48.198879 | orchestrator 
| TASKS RECAP ******************************************************************** 2026-01-13 01:10:48.198884 | orchestrator | Tuesday 13 January 2026 01:10:45 +0000 (0:00:00.409) 0:07:51.808 ******* 2026-01-13 01:10:48.198889 | orchestrator | =============================================================================== 2026-01-13 01:10:48.198895 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.02s 2026-01-13 01:10:48.198899 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 24.32s 2026-01-13 01:10:48.198903 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.42s 2026-01-13 01:10:48.198910 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.92s 2026-01-13 01:10:48.198916 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.71s 2026-01-13 01:10:48.198920 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.25s 2026-01-13 01:10:48.198925 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.50s 2026-01-13 01:10:48.198930 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 15.33s 2026-01-13 01:10:48.198935 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 15.13s 2026-01-13 01:10:48.198940 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.85s 2026-01-13 01:10:48.198945 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.77s 2026-01-13 01:10:48.198950 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.68s 2026-01-13 01:10:48.198955 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.43s 2026-01-13 01:10:48.198960 | orchestrator | nova-cell : Get a 
list of existing cells ------------------------------- 11.76s 2026-01-13 01:10:48.198965 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.10s 2026-01-13 01:10:48.198970 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 10.89s 2026-01-13 01:10:48.198975 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 9.49s 2026-01-13 01:10:48.198980 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.72s 2026-01-13 01:10:48.198985 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.28s 2026-01-13 01:10:48.198995 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 6.99s 2026-01-13 01:10:48.199004 | orchestrator | 2026-01-13 01:10:48 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:10:51.242059 | orchestrator | 2026-01-13 01:10:51 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:51.242116 | orchestrator | 2026-01-13 01:10:51 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:10:54.296909 | orchestrator | 2026-01-13 01:10:54 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:54.296967 | orchestrator | 2026-01-13 01:10:54 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:10:57.344650 | orchestrator | 2026-01-13 01:10:57 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:10:57.344710 | orchestrator | 2026-01-13 01:10:57 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:11:00.388492 | orchestrator | 2026-01-13 01:11:00 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:11:00.388555 | orchestrator | 2026-01-13 01:11:00 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:11:03.432844 | orchestrator | 2026-01-13 01:11:03 | INFO  | Task 
bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:11:03.432889 | orchestrator | 2026-01-13 01:11:03 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:11:06.483755 | orchestrator | 2026-01-13 01:11:06 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:11:06.483804 | orchestrator | 2026-01-13 01:11:06 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:11:09.535431 | orchestrator | 2026-01-13 01:11:09 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:11:09.535485 | orchestrator | 2026-01-13 01:11:09 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:11:12.581014 | orchestrator | 2026-01-13 01:11:12 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:11:12.581080 | orchestrator | 2026-01-13 01:11:12 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:11:15.626405 | orchestrator | 2026-01-13 01:11:15 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state STARTED 2026-01-13 01:11:15.626455 | orchestrator | 2026-01-13 01:11:15 | INFO  | Wait 1 second(s) until the next check 2026-01-13 01:11:18.667407 | orchestrator | 2026-01-13 01:11:18 | INFO  | Task bcd6be5f-ed3e-4ed4-95a6-2b053d117117 is in state SUCCESS 2026-01-13 01:11:18.667594 | orchestrator | 2026-01-13 01:11:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:11:18.668986 | orchestrator | 2026-01-13 01:11:18.669115 | orchestrator | 2026-01-13 01:11:18.669125 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 01:11:18.669131 | orchestrator | 2026-01-13 01:11:18.669136 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 01:11:18.669142 | orchestrator | Tuesday 13 January 2026 01:06:54 +0000 (0:00:00.205) 0:00:00.205 ******* 2026-01-13 01:11:18.669148 | orchestrator | ok: [testbed-node-0] 2026-01-13 
01:11:18.669153 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:11:18.669159 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:11:18.669164 | orchestrator | 2026-01-13 01:11:18.669169 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 01:11:18.669175 | orchestrator | Tuesday 13 January 2026 01:06:54 +0000 (0:00:00.246) 0:00:00.451 ******* 2026-01-13 01:11:18.669180 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-01-13 01:11:18.669185 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-01-13 01:11:18.669195 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-01-13 01:11:18.669214 | orchestrator | 2026-01-13 01:11:18.669219 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-01-13 01:11:18.669224 | orchestrator | 2026-01-13 01:11:18.669230 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-13 01:11:18.669235 | orchestrator | Tuesday 13 January 2026 01:06:54 +0000 (0:00:00.382) 0:00:00.834 ******* 2026-01-13 01:11:18.669332 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:11:18.669339 | orchestrator | 2026-01-13 01:11:18.669344 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-01-13 01:11:18.669397 | orchestrator | Tuesday 13 January 2026 01:06:55 +0000 (0:00:00.508) 0:00:01.343 ******* 2026-01-13 01:11:18.669403 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-01-13 01:11:18.669409 | orchestrator | 2026-01-13 01:11:18.669414 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-01-13 01:11:18.669419 | orchestrator | Tuesday 13 January 2026 01:06:58 +0000 (0:00:03.349) 0:00:04.692 ******* 2026-01-13 01:11:18.669551 | 
orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-01-13 01:11:18.669614 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-01-13 01:11:18.669619 | orchestrator | 2026-01-13 01:11:18.669622 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-01-13 01:11:18.669625 | orchestrator | Tuesday 13 January 2026 01:07:04 +0000 (0:00:06.076) 0:00:10.768 ******* 2026-01-13 01:11:18.669629 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-13 01:11:18.669632 | orchestrator | 2026-01-13 01:11:18.669635 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-01-13 01:11:18.669638 | orchestrator | Tuesday 13 January 2026 01:07:07 +0000 (0:00:02.841) 0:00:13.609 ******* 2026-01-13 01:11:18.669642 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-13 01:11:18.669645 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-01-13 01:11:18.669648 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-01-13 01:11:18.669651 | orchestrator | 2026-01-13 01:11:18.669654 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-01-13 01:11:18.669657 | orchestrator | Tuesday 13 January 2026 01:07:14 +0000 (0:00:06.613) 0:00:20.223 ******* 2026-01-13 01:11:18.669661 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-13 01:11:18.669664 | orchestrator | 2026-01-13 01:11:18.669667 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-01-13 01:11:18.669670 | orchestrator | Tuesday 13 January 2026 01:07:17 +0000 (0:00:02.887) 0:00:23.110 ******* 2026-01-13 01:11:18.669673 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-01-13 01:11:18.669676 | 
orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-01-13 01:11:18.669679 | orchestrator | 2026-01-13 01:11:18.669682 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-01-13 01:11:18.669685 | orchestrator | Tuesday 13 January 2026 01:07:22 +0000 (0:00:05.787) 0:00:28.898 ******* 2026-01-13 01:11:18.669688 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-01-13 01:11:18.669691 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-01-13 01:11:18.669694 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-01-13 01:11:18.669697 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-01-13 01:11:18.669700 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-01-13 01:11:18.669703 | orchestrator | 2026-01-13 01:11:18.669706 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-13 01:11:18.669709 | orchestrator | Tuesday 13 January 2026 01:07:36 +0000 (0:00:13.548) 0:00:42.447 ******* 2026-01-13 01:11:18.669712 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:11:18.669721 | orchestrator | 2026-01-13 01:11:18.669724 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-01-13 01:11:18.669727 | orchestrator | Tuesday 13 January 2026 01:07:37 +0000 (0:00:00.896) 0:00:43.344 ******* 2026-01-13 01:11:18.669730 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.669733 | orchestrator | 2026-01-13 01:11:18.669736 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-01-13 01:11:18.669739 | orchestrator | Tuesday 13 January 2026 01:07:42 +0000 (0:00:05.143) 0:00:48.488 ******* 2026-01-13 01:11:18.669748 | orchestrator | 
changed: [testbed-node-0] 2026-01-13 01:11:18.669751 | orchestrator | 2026-01-13 01:11:18.669754 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-01-13 01:11:18.669764 | orchestrator | Tuesday 13 January 2026 01:07:46 +0000 (0:00:04.121) 0:00:52.609 ******* 2026-01-13 01:11:18.669767 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:11:18.669770 | orchestrator | 2026-01-13 01:11:18.669773 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-01-13 01:11:18.669776 | orchestrator | Tuesday 13 January 2026 01:07:49 +0000 (0:00:03.151) 0:00:55.761 ******* 2026-01-13 01:11:18.669779 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-01-13 01:11:18.669783 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-01-13 01:11:18.669786 | orchestrator | 2026-01-13 01:11:18.669789 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-01-13 01:11:18.669792 | orchestrator | Tuesday 13 January 2026 01:07:58 +0000 (0:00:09.166) 0:01:04.927 ******* 2026-01-13 01:11:18.669795 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-01-13 01:11:18.669798 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-01-13 01:11:18.669802 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-01-13 01:11:18.669806 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-01-13 01:11:18.669809 | orchestrator | 2026-01-13 01:11:18.669812 | orchestrator | TASK [octavia : Create loadbalancer management 
network] ************************ 2026-01-13 01:11:18.669815 | orchestrator | Tuesday 13 January 2026 01:08:13 +0000 (0:00:14.921) 0:01:19.849 ******* 2026-01-13 01:11:18.669818 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.669821 | orchestrator | 2026-01-13 01:11:18.669824 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-01-13 01:11:18.669827 | orchestrator | Tuesday 13 January 2026 01:08:17 +0000 (0:00:03.755) 0:01:23.604 ******* 2026-01-13 01:11:18.669830 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.669833 | orchestrator | 2026-01-13 01:11:18.669836 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-01-13 01:11:18.669839 | orchestrator | Tuesday 13 January 2026 01:08:22 +0000 (0:00:05.177) 0:01:28.782 ******* 2026-01-13 01:11:18.669842 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:11:18.669845 | orchestrator | 2026-01-13 01:11:18.669849 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-01-13 01:11:18.669852 | orchestrator | Tuesday 13 January 2026 01:08:22 +0000 (0:00:00.224) 0:01:29.007 ******* 2026-01-13 01:11:18.669855 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:11:18.669860 | orchestrator | 2026-01-13 01:11:18.669865 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-13 01:11:18.669870 | orchestrator | Tuesday 13 January 2026 01:08:27 +0000 (0:00:04.178) 0:01:33.185 ******* 2026-01-13 01:11:18.669875 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:11:18.669883 | orchestrator | 2026-01-13 01:11:18.669888 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-01-13 01:11:18.669893 | orchestrator | Tuesday 13 January 2026 01:08:28 +0000 (0:00:00.941) 0:01:34.127 
******* 2026-01-13 01:11:18.669899 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:11:18.669904 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.669909 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:11:18.669914 | orchestrator | 2026-01-13 01:11:18.669918 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-01-13 01:11:18.669924 | orchestrator | Tuesday 13 January 2026 01:08:32 +0000 (0:00:04.274) 0:01:38.401 ******* 2026-01-13 01:11:18.669929 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.669934 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:11:18.669937 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:11:18.669940 | orchestrator | 2026-01-13 01:11:18.670062 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-01-13 01:11:18.670075 | orchestrator | Tuesday 13 January 2026 01:08:36 +0000 (0:00:04.455) 0:01:42.856 ******* 2026-01-13 01:11:18.670080 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.670085 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:11:18.670091 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:11:18.670096 | orchestrator | 2026-01-13 01:11:18.670101 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-01-13 01:11:18.670107 | orchestrator | Tuesday 13 January 2026 01:08:37 +0000 (0:00:00.737) 0:01:43.594 ******* 2026-01-13 01:11:18.670112 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:11:18.670118 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:11:18.670123 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:11:18.670128 | orchestrator | 2026-01-13 01:11:18.670132 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-01-13 01:11:18.670137 | orchestrator | Tuesday 13 January 2026 01:08:39 +0000 (0:00:02.051) 0:01:45.646 ******* 2026-01-13 
01:11:18.670141 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:11:18.670144 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.670147 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:11:18.670150 | orchestrator | 2026-01-13 01:11:18.670155 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-01-13 01:11:18.670160 | orchestrator | Tuesday 13 January 2026 01:08:40 +0000 (0:00:01.175) 0:01:46.821 ******* 2026-01-13 01:11:18.670165 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.670339 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:11:18.670349 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:11:18.670355 | orchestrator | 2026-01-13 01:11:18.670361 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-01-13 01:11:18.670371 | orchestrator | Tuesday 13 January 2026 01:08:41 +0000 (0:00:01.028) 0:01:47.849 ******* 2026-01-13 01:11:18.670378 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:11:18.670384 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:11:18.670389 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.670395 | orchestrator | 2026-01-13 01:11:18.670419 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-01-13 01:11:18.670424 | orchestrator | Tuesday 13 January 2026 01:08:43 +0000 (0:00:01.793) 0:01:49.643 ******* 2026-01-13 01:11:18.670430 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.670435 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:11:18.670440 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:11:18.670446 | orchestrator | 2026-01-13 01:11:18.670451 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-01-13 01:11:18.670457 | orchestrator | Tuesday 13 January 2026 01:08:45 +0000 (0:00:01.707) 0:01:51.350 ******* 2026-01-13 01:11:18.670462 
| orchestrator | ok: [testbed-node-0] 2026-01-13 01:11:18.670468 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:11:18.670473 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:11:18.670478 | orchestrator | 2026-01-13 01:11:18.670484 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-01-13 01:11:18.670496 | orchestrator | Tuesday 13 January 2026 01:08:45 +0000 (0:00:00.609) 0:01:51.959 ******* 2026-01-13 01:11:18.670501 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:11:18.670506 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:11:18.670511 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:11:18.670516 | orchestrator | 2026-01-13 01:11:18.670522 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-13 01:11:18.670527 | orchestrator | Tuesday 13 January 2026 01:08:48 +0000 (0:00:02.448) 0:01:54.408 ******* 2026-01-13 01:11:18.670532 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:11:18.670538 | orchestrator | 2026-01-13 01:11:18.670543 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-01-13 01:11:18.670549 | orchestrator | Tuesday 13 January 2026 01:08:49 +0000 (0:00:00.765) 0:01:55.173 ******* 2026-01-13 01:11:18.670554 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:11:18.670571 | orchestrator | 2026-01-13 01:11:18.670577 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-01-13 01:11:18.670582 | orchestrator | Tuesday 13 January 2026 01:08:52 +0000 (0:00:03.287) 0:01:58.461 ******* 2026-01-13 01:11:18.670587 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:11:18.670593 | orchestrator | 2026-01-13 01:11:18.670598 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-01-13 01:11:18.670603 | 
orchestrator | Tuesday 13 January 2026 01:08:55 +0000 (0:00:03.250) 0:02:01.711 ******* 2026-01-13 01:11:18.670609 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-01-13 01:11:18.670614 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-01-13 01:11:18.670619 | orchestrator | 2026-01-13 01:11:18.670625 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-01-13 01:11:18.670639 | orchestrator | Tuesday 13 January 2026 01:09:01 +0000 (0:00:05.803) 0:02:07.515 ******* 2026-01-13 01:11:18.670643 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:11:18.670646 | orchestrator | 2026-01-13 01:11:18.670650 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-01-13 01:11:18.670654 | orchestrator | Tuesday 13 January 2026 01:09:04 +0000 (0:00:02.819) 0:02:10.335 ******* 2026-01-13 01:11:18.670657 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:11:18.670660 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:11:18.670664 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:11:18.670667 | orchestrator | 2026-01-13 01:11:18.670671 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-01-13 01:11:18.670675 | orchestrator | Tuesday 13 January 2026 01:09:04 +0000 (0:00:00.299) 0:02:10.635 ******* 2026-01-13 01:11:18.670680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.670700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.670708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.670712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.670716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.670720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.670724 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.670728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.670747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.670752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.670756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.670760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.670764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:11:18.670768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:11:18.670773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:11:18.670777 | orchestrator | 2026-01-13 01:11:18.670782 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-01-13 01:11:18.670786 | orchestrator | Tuesday 13 January 2026 01:09:06 +0000 (0:00:02.065) 0:02:12.700 ******* 2026-01-13 01:11:18.670790 | orchestrator | 
skipping: [testbed-node-0] 2026-01-13 01:11:18.670793 | orchestrator | 2026-01-13 01:11:18.670804 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-01-13 01:11:18.670808 | orchestrator | Tuesday 13 January 2026 01:09:06 +0000 (0:00:00.139) 0:02:12.839 ******* 2026-01-13 01:11:18.670811 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:11:18.670815 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:11:18.670818 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:11:18.670822 | orchestrator | 2026-01-13 01:11:18.670826 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-01-13 01:11:18.670829 | orchestrator | Tuesday 13 January 2026 01:09:07 +0000 (0:00:00.481) 0:02:13.321 ******* 2026-01-13 01:11:18.670833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-13 01:11:18.670837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-13 01:11:18.670841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.670844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.670850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:11:18.670854 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:11:18.670868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-13 01:11:18.670872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-13 01:11:18.670876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.670880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.670883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:11:18.670889 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:11:18.670892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-13 01:11:18.670906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-13 01:11:18.670911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-13 
01:11:18.670914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.670918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:11:18.670922 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:11:18.670926 | orchestrator | 2026-01-13 01:11:18.670929 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-13 01:11:18.670933 | orchestrator | Tuesday 13 January 2026 01:09:07 +0000 (0:00:00.691) 0:02:14.013 ******* 2026-01-13 01:11:18.670939 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:11:18.670942 | orchestrator | 2026-01-13 01:11:18.670946 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-01-13 01:11:18.670949 | orchestrator | Tuesday 13 January 2026 
01:09:08 +0000 (0:00:00.543) 0:02:14.556 ******* 2026-01-13 01:11:18.670953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.670967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 
01:11:18.670971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.670975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.670979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-01-13 01:11:18.670986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.670990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.670995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671034 | orchestrator | 2026-01-13 01:11:18.671038 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-13 01:11:18.671041 | orchestrator | Tuesday 13 January 2026 01:09:13 +0000 (0:00:04.758) 0:02:19.314 ******* 2026-01-13 01:11:18.671045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-13 01:11:18.671049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-13 01:11:18.671055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.671059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.671063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:11:18.671066 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:11:18.671074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-13 01:11:18.671078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-13 01:11:18.671082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.671088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.671092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:11:18.671096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-13 01:11:18.671101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-13 01:11:18.671105 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:11:18.671110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.671114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.671121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:11:18.671124 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:11:18.671128 | orchestrator | 2026-01-13 01:11:18.671131 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-01-13 01:11:18.671135 | orchestrator | Tuesday 13 January 2026 01:09:14 +0000 (0:00:00.955) 0:02:20.270 ******* 2026-01-13 01:11:18.671139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-13 01:11:18.671142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-13 01:11:18.671148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.671153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.671157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:11:18.671163 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:11:18.671167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-01-13 01:11:18.671170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-13 01:11:18.671174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-13 01:11:18.671181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.671185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-13 01:11:18.671189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.671195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-01-13 01:11:18.671199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:11:18.671202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-13 01:11:18.671206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-13 01:11:18.671209 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:11:18.671213 | 
orchestrator | skipping: [testbed-node-2] 2026-01-13 01:11:18.671217 | orchestrator | 2026-01-13 01:11:18.671220 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-01-13 01:11:18.671224 | orchestrator | Tuesday 13 January 2026 01:09:15 +0000 (0:00:00.844) 0:02:21.115 ******* 2026-01-13 01:11:18.671231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.671238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.671242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.671245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.671249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.671254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.671260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 
01:11:18.671304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-13 01:11:18.671308 | orchestrator |
2026-01-13 01:11:18.671311 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-01-13 01:11:18.671315 | orchestrator | Tuesday 13 January 2026  01:09:19 +0000 (0:00:04.640)       0:02:25.755 *******
2026-01-13 01:11:18.671318 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-01-13 01:11:18.671322 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-01-13 01:11:18.671326 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-01-13 01:11:18.671329 | orchestrator |
2026-01-13 01:11:18.671333 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-01-13 01:11:18.671336 | orchestrator | Tuesday 13 January 2026  01:09:22 +0000 (0:00:02.388)       0:02:28.143 *******
2026-01-13 01:11:18.671340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '',
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.671344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.671353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.671358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.671362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.671365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.671369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671410 | orchestrator | 2026-01-13 01:11:18.671414 | orchestrator | TASK [octavia : Copying over Octavia SSH key] 
**********************************
2026-01-13 01:11:18.671417 | orchestrator | Tuesday 13 January 2026  01:09:38 +0000 (0:00:16.314)       0:02:44.457 *******
2026-01-13 01:11:18.671421 | orchestrator | changed: [testbed-node-0]
2026-01-13 01:11:18.671424 | orchestrator | changed: [testbed-node-1]
2026-01-13 01:11:18.671428 | orchestrator | changed: [testbed-node-2]
2026-01-13 01:11:18.671431 | orchestrator |
2026-01-13 01:11:18.671435 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-01-13 01:11:18.671438 | orchestrator | Tuesday 13 January 2026  01:09:39 +0000 (0:00:01.495)       0:02:45.953 *******
2026-01-13 01:11:18.671443 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-01-13 01:11:18.671447 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-01-13 01:11:18.671452 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-01-13 01:11:18.671455 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-01-13 01:11:18.671459 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-01-13 01:11:18.671463 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-01-13 01:11:18.671466 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-01-13 01:11:18.671470 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-01-13 01:11:18.671473 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-01-13 01:11:18.671477 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-01-13 01:11:18.671480 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-01-13 01:11:18.671484 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-01-13 01:11:18.671487 | orchestrator |
2026-01-13 01:11:18.671491 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-01-13 01:11:18.671494 | orchestrator | Tuesday 13 January 2026  01:09:46 +0000 (0:00:06.086)       0:02:52.039 *******
2026-01-13 01:11:18.671498 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-01-13 01:11:18.671501 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-01-13 01:11:18.671505 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-01-13 01:11:18.671508 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-01-13 01:11:18.671512 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-01-13 01:11:18.671515 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-01-13 01:11:18.671518 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-01-13 01:11:18.671522 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-01-13 01:11:18.671525 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-01-13 01:11:18.671529 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-01-13 01:11:18.671533 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-01-13 01:11:18.671536 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-01-13 01:11:18.671540 | orchestrator |
2026-01-13 01:11:18.671543 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-01-13 01:11:18.671547 | orchestrator | Tuesday 13 January 2026  01:09:51 +0000 (0:00:05.063)       0:02:57.103 *******
2026-01-13 01:11:18.671550 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-01-13 01:11:18.671554 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-01-13 01:11:18.671568 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-01-13 01:11:18.671574 | orchestrator |
changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-13 01:11:18.671580 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-13 01:11:18.671588 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-13 01:11:18.671594 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-13 01:11:18.671600 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-13 01:11:18.671606 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-13 01:11:18.671612 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-13 01:11:18.671618 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-13 01:11:18.671622 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-13 01:11:18.671625 | orchestrator | 2026-01-13 01:11:18.671628 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-01-13 01:11:18.671632 | orchestrator | Tuesday 13 January 2026 01:09:55 +0000 (0:00:04.638) 0:03:01.741 ******* 2026-01-13 01:11:18.671637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.671651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.671659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-13 01:11:18.671665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.671673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.671679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-13 01:11:18.671684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-13 01:11:18.671744 | orchestrator | 2026-01-13 01:11:18.671747 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-13 01:11:18.671751 | orchestrator | Tuesday 13 January 2026 01:09:59 +0000 (0:00:03.506) 0:03:05.248 ******* 2026-01-13 01:11:18.671754 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:11:18.671758 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:11:18.671761 | orchestrator | skipping: [testbed-node-2] 
2026-01-13 01:11:18.671765 | orchestrator | 2026-01-13 01:11:18.671768 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-01-13 01:11:18.671772 | orchestrator | Tuesday 13 January 2026 01:09:59 +0000 (0:00:00.297) 0:03:05.545 ******* 2026-01-13 01:11:18.671775 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.671779 | orchestrator | 2026-01-13 01:11:18.671782 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-01-13 01:11:18.671786 | orchestrator | Tuesday 13 January 2026 01:10:01 +0000 (0:00:01.830) 0:03:07.375 ******* 2026-01-13 01:11:18.671789 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.671793 | orchestrator | 2026-01-13 01:11:18.671796 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-01-13 01:11:18.671801 | orchestrator | Tuesday 13 January 2026 01:10:03 +0000 (0:00:01.911) 0:03:09.286 ******* 2026-01-13 01:11:18.671805 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.671809 | orchestrator | 2026-01-13 01:11:18.671812 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-01-13 01:11:18.671816 | orchestrator | Tuesday 13 January 2026 01:10:05 +0000 (0:00:02.219) 0:03:11.506 ******* 2026-01-13 01:11:18.671819 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.671823 | orchestrator | 2026-01-13 01:11:18.671826 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-01-13 01:11:18.671830 | orchestrator | Tuesday 13 January 2026 01:10:08 +0000 (0:00:02.550) 0:03:14.057 ******* 2026-01-13 01:11:18.671833 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.671837 | orchestrator | 2026-01-13 01:11:18.671840 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-13 01:11:18.671844 | orchestrator | 
Tuesday 13 January 2026 01:10:30 +0000 (0:00:22.676) 0:03:36.733 ******* 2026-01-13 01:11:18.671847 | orchestrator | 2026-01-13 01:11:18.671851 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-13 01:11:18.671854 | orchestrator | Tuesday 13 January 2026 01:10:30 +0000 (0:00:00.064) 0:03:36.797 ******* 2026-01-13 01:11:18.671857 | orchestrator | 2026-01-13 01:11:18.671861 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-13 01:11:18.671864 | orchestrator | Tuesday 13 January 2026 01:10:30 +0000 (0:00:00.068) 0:03:36.866 ******* 2026-01-13 01:11:18.671868 | orchestrator | 2026-01-13 01:11:18.671871 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-01-13 01:11:18.671875 | orchestrator | Tuesday 13 January 2026 01:10:30 +0000 (0:00:00.078) 0:03:36.944 ******* 2026-01-13 01:11:18.671878 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.671882 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:11:18.671885 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:11:18.671889 | orchestrator | 2026-01-13 01:11:18.671892 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-01-13 01:11:18.671896 | orchestrator | Tuesday 13 January 2026 01:10:46 +0000 (0:00:15.464) 0:03:52.409 ******* 2026-01-13 01:11:18.671899 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.671903 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:11:18.671906 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:11:18.671910 | orchestrator | 2026-01-13 01:11:18.671913 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-01-13 01:11:18.671917 | orchestrator | Tuesday 13 January 2026 01:10:52 +0000 (0:00:06.024) 0:03:58.433 ******* 2026-01-13 01:11:18.671921 | orchestrator | changed: [testbed-node-2] 
2026-01-13 01:11:18.671924 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.671928 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:11:18.671931 | orchestrator | 2026-01-13 01:11:18.671934 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-01-13 01:11:18.671938 | orchestrator | Tuesday 13 January 2026 01:11:02 +0000 (0:00:10.195) 0:04:08.629 ******* 2026-01-13 01:11:18.671941 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:11:18.671945 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:11:18.671948 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.671952 | orchestrator | 2026-01-13 01:11:18.671956 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-01-13 01:11:18.671963 | orchestrator | Tuesday 13 January 2026 01:11:10 +0000 (0:00:08.123) 0:04:16.752 ******* 2026-01-13 01:11:18.671981 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:11:18.671987 | orchestrator | changed: [testbed-node-2] 2026-01-13 01:11:18.671991 | orchestrator | changed: [testbed-node-1] 2026-01-13 01:11:18.671996 | orchestrator | 2026-01-13 01:11:18.672001 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:11:18.672007 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-13 01:11:18.672016 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-13 01:11:18.672023 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-13 01:11:18.672028 | orchestrator | 2026-01-13 01:11:18.672033 | orchestrator | 2026-01-13 01:11:18.672041 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:11:18.672046 | orchestrator | Tuesday 13 January 2026 01:11:16 +0000 
(0:00:05.519) 0:04:22.272 ******* 2026-01-13 01:11:18.672054 | orchestrator | =============================================================================== 2026-01-13 01:11:18.672059 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.68s 2026-01-13 01:11:18.672065 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.31s 2026-01-13 01:11:18.672070 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.46s 2026-01-13 01:11:18.672075 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.92s 2026-01-13 01:11:18.672087 | orchestrator | octavia : Adding octavia related roles --------------------------------- 13.55s 2026-01-13 01:11:18.672093 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.20s 2026-01-13 01:11:18.672098 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.17s 2026-01-13 01:11:18.672104 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.12s 2026-01-13 01:11:18.672109 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 6.61s 2026-01-13 01:11:18.672115 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 6.09s 2026-01-13 01:11:18.672120 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.08s 2026-01-13 01:11:18.672125 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.02s 2026-01-13 01:11:18.672131 | orchestrator | octavia : Get security groups for octavia ------------------------------- 5.80s 2026-01-13 01:11:18.672136 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 5.79s 2026-01-13 01:11:18.672142 | orchestrator | octavia : Restart octavia-worker container 
------------------------------ 5.52s 2026-01-13 01:11:18.672147 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.18s 2026-01-13 01:11:18.672153 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.14s 2026-01-13 01:11:18.672158 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.06s 2026-01-13 01:11:18.672164 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.76s 2026-01-13 01:11:18.672169 | orchestrator | octavia : Copying over config.json files for services ------------------- 4.64s 2026-01-13 01:11:21.709874 | orchestrator | 2026-01-13 01:11:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:11:24.755473 | orchestrator | 2026-01-13 01:11:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:11:27.799805 | orchestrator | 2026-01-13 01:11:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:11:30.847301 | orchestrator | 2026-01-13 01:11:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:11:33.893957 | orchestrator | 2026-01-13 01:11:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:11:36.933290 | orchestrator | 2026-01-13 01:11:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:11:39.975470 | orchestrator | 2026-01-13 01:11:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:11:43.018050 | orchestrator | 2026-01-13 01:11:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:11:46.062003 | orchestrator | 2026-01-13 01:11:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:11:49.100805 | orchestrator | 2026-01-13 01:11:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:11:52.140737 | orchestrator | 2026-01-13 01:11:52 | INFO  | Wait 1 second(s) until refresh of running tasks 
2026-01-13 01:11:55.191121 | orchestrator | 2026-01-13 01:11:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:11:58.237783 | orchestrator | 2026-01-13 01:11:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:12:01.285975 | orchestrator | 2026-01-13 01:12:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:12:04.332713 | orchestrator | 2026-01-13 01:12:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:12:07.380604 | orchestrator | 2026-01-13 01:12:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:12:10.429601 | orchestrator | 2026-01-13 01:12:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:12:13.473058 | orchestrator | 2026-01-13 01:12:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:12:16.521032 | orchestrator | 2026-01-13 01:12:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-13 01:12:19.566950 | orchestrator | 2026-01-13 01:12:19.973753 | orchestrator | 2026-01-13 01:12:19.979703 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Jan 13 01:12:19 UTC 2026 2026-01-13 01:12:19.979766 | orchestrator | 2026-01-13 01:12:20.356073 | orchestrator | ok: Runtime: 0:33:48.557797 2026-01-13 01:12:20.661122 | 2026-01-13 01:12:20.661284 | TASK [Bootstrap services] 2026-01-13 01:12:21.463401 | orchestrator | 2026-01-13 01:12:21.463531 | orchestrator | # BOOTSTRAP 2026-01-13 01:12:21.463546 | orchestrator | 2026-01-13 01:12:21.463555 | orchestrator | + set -e 2026-01-13 01:12:21.463563 | orchestrator | + echo 2026-01-13 01:12:21.463572 | orchestrator | + echo '# BOOTSTRAP' 2026-01-13 01:12:21.463582 | orchestrator | + echo 2026-01-13 01:12:21.463608 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-01-13 01:12:21.471826 | orchestrator | + set -e 2026-01-13 01:12:21.471881 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 
2026-01-13 01:12:25.863300 | orchestrator | 2026-01-13 01:12:25 | INFO  | It takes a moment until task 65a2231a-56ff-4016-a60d-b8b13fdafb9c (flavor-manager) has been started and output is visible here. 2026-01-13 01:12:32.844302 | orchestrator | 2026-01-13 01:12:28 | INFO  | Flavor SCS-1L-1 created 2026-01-13 01:12:32.844394 | orchestrator | 2026-01-13 01:12:29 | INFO  | Flavor SCS-1L-1-5 created 2026-01-13 01:12:32.844406 | orchestrator | 2026-01-13 01:12:29 | INFO  | Flavor SCS-1V-2 created 2026-01-13 01:12:32.844414 | orchestrator | 2026-01-13 01:12:29 | INFO  | Flavor SCS-1V-2-5 created 2026-01-13 01:12:32.844421 | orchestrator | 2026-01-13 01:12:29 | INFO  | Flavor SCS-1V-4 created 2026-01-13 01:12:32.844428 | orchestrator | 2026-01-13 01:12:30 | INFO  | Flavor SCS-1V-4-10 created 2026-01-13 01:12:32.844435 | orchestrator | 2026-01-13 01:12:30 | INFO  | Flavor SCS-1V-8 created 2026-01-13 01:12:32.844439 | orchestrator | 2026-01-13 01:12:30 | INFO  | Flavor SCS-1V-8-20 created 2026-01-13 01:12:32.844452 | orchestrator | 2026-01-13 01:12:30 | INFO  | Flavor SCS-2V-4 created 2026-01-13 01:12:32.844456 | orchestrator | 2026-01-13 01:12:30 | INFO  | Flavor SCS-2V-4-10 created 2026-01-13 01:12:32.844460 | orchestrator | 2026-01-13 01:12:30 | INFO  | Flavor SCS-2V-8 created 2026-01-13 01:12:32.844464 | orchestrator | 2026-01-13 01:12:30 | INFO  | Flavor SCS-2V-8-20 created 2026-01-13 01:12:32.844498 | orchestrator | 2026-01-13 01:12:30 | INFO  | Flavor SCS-2V-16 created 2026-01-13 01:12:32.844502 | orchestrator | 2026-01-13 01:12:30 | INFO  | Flavor SCS-2V-16-50 created 2026-01-13 01:12:32.844506 | orchestrator | 2026-01-13 01:12:31 | INFO  | Flavor SCS-4V-8 created 2026-01-13 01:12:32.844511 | orchestrator | 2026-01-13 01:12:31 | INFO  | Flavor SCS-4V-8-20 created 2026-01-13 01:12:32.844516 | orchestrator | 2026-01-13 01:12:31 | INFO  | Flavor SCS-4V-16 created 2026-01-13 01:12:32.844523 | orchestrator | 2026-01-13 01:12:31 | INFO  | Flavor SCS-4V-16-50 created 
2026-01-13 01:12:32.844532 | orchestrator | 2026-01-13 01:12:31 | INFO  | Flavor SCS-4V-32 created 2026-01-13 01:12:32.844540 | orchestrator | 2026-01-13 01:12:31 | INFO  | Flavor SCS-4V-32-100 created 2026-01-13 01:12:32.844546 | orchestrator | 2026-01-13 01:12:31 | INFO  | Flavor SCS-8V-16 created 2026-01-13 01:12:32.844552 | orchestrator | 2026-01-13 01:12:31 | INFO  | Flavor SCS-8V-16-50 created 2026-01-13 01:12:32.844559 | orchestrator | 2026-01-13 01:12:31 | INFO  | Flavor SCS-8V-32 created 2026-01-13 01:12:32.844565 | orchestrator | 2026-01-13 01:12:32 | INFO  | Flavor SCS-8V-32-100 created 2026-01-13 01:12:32.844571 | orchestrator | 2026-01-13 01:12:32 | INFO  | Flavor SCS-16V-32 created 2026-01-13 01:12:32.844578 | orchestrator | 2026-01-13 01:12:32 | INFO  | Flavor SCS-16V-32-100 created 2026-01-13 01:12:32.844583 | orchestrator | 2026-01-13 01:12:32 | INFO  | Flavor SCS-2V-4-20s created 2026-01-13 01:12:32.844589 | orchestrator | 2026-01-13 01:12:32 | INFO  | Flavor SCS-4V-8-50s created 2026-01-13 01:12:32.844595 | orchestrator | 2026-01-13 01:12:32 | INFO  | Flavor SCS-8V-32-100s created 2026-01-13 01:12:35.108812 | orchestrator | 2026-01-13 01:12:35 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-01-13 01:12:45.216668 | orchestrator | 2026-01-13 01:12:45 | INFO  | Task c177d464-cf3b-4607-b28c-f60cc9756e07 (bootstrap-basic) was prepared for execution. 2026-01-13 01:12:45.216743 | orchestrator | 2026-01-13 01:12:45 | INFO  | It takes a moment until task c177d464-cf3b-4607-b28c-f60cc9756e07 (bootstrap-basic) has been started and output is visible here. 
2026-01-13 01:13:31.044815 | orchestrator | 2026-01-13 01:13:31.044894 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-01-13 01:13:31.044909 | orchestrator | 2026-01-13 01:13:31.044917 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-13 01:13:31.044924 | orchestrator | Tuesday 13 January 2026 01:12:49 +0000 (0:00:00.065) 0:00:00.065 ******* 2026-01-13 01:13:31.044930 | orchestrator | ok: [localhost] 2026-01-13 01:13:31.044938 | orchestrator | 2026-01-13 01:13:31.044944 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-01-13 01:13:31.044951 | orchestrator | Tuesday 13 January 2026 01:12:51 +0000 (0:00:01.806) 0:00:01.872 ******* 2026-01-13 01:13:31.044956 | orchestrator | ok: [localhost] 2026-01-13 01:13:31.044959 | orchestrator | 2026-01-13 01:13:31.044963 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-01-13 01:13:31.044967 | orchestrator | Tuesday 13 January 2026 01:12:59 +0000 (0:00:08.537) 0:00:10.410 ******* 2026-01-13 01:13:31.044971 | orchestrator | changed: [localhost] 2026-01-13 01:13:31.044975 | orchestrator | 2026-01-13 01:13:31.044980 | orchestrator | TASK [Create public network] *************************************************** 2026-01-13 01:13:31.044984 | orchestrator | Tuesday 13 January 2026 01:13:07 +0000 (0:00:07.991) 0:00:18.402 ******* 2026-01-13 01:13:31.044988 | orchestrator | changed: [localhost] 2026-01-13 01:13:31.044992 | orchestrator | 2026-01-13 01:13:31.044996 | orchestrator | TASK [Set public network to default] ******************************************* 2026-01-13 01:13:31.044999 | orchestrator | Tuesday 13 January 2026 01:13:13 +0000 (0:00:05.944) 0:00:24.346 ******* 2026-01-13 01:13:31.045005 | orchestrator | changed: [localhost] 2026-01-13 01:13:31.045009 | orchestrator | 2026-01-13 01:13:31.045013 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-01-13 01:13:31.045017 | orchestrator | Tuesday 13 January 2026 01:13:19 +0000 (0:00:06.144) 0:00:30.490 ******* 2026-01-13 01:13:31.045021 | orchestrator | changed: [localhost] 2026-01-13 01:13:31.045025 | orchestrator | 2026-01-13 01:13:31.045028 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-01-13 01:13:31.045032 | orchestrator | Tuesday 13 January 2026 01:13:23 +0000 (0:00:04.015) 0:00:34.506 ******* 2026-01-13 01:13:31.045036 | orchestrator | changed: [localhost] 2026-01-13 01:13:31.045040 | orchestrator | 2026-01-13 01:13:31.045044 | orchestrator | TASK [Create manager role] ***************************************************** 2026-01-13 01:13:31.045057 | orchestrator | Tuesday 13 January 2026 01:13:27 +0000 (0:00:03.598) 0:00:38.105 ******* 2026-01-13 01:13:31.045067 | orchestrator | ok: [localhost] 2026-01-13 01:13:31.045073 | orchestrator | 2026-01-13 01:13:31.045079 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:13:31.045085 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:13:31.045091 | orchestrator | 2026-01-13 01:13:31.045097 | orchestrator | 2026-01-13 01:13:31.045104 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:13:31.045110 | orchestrator | Tuesday 13 January 2026 01:13:30 +0000 (0:00:03.376) 0:00:41.482 ******* 2026-01-13 01:13:31.045115 | orchestrator | =============================================================================== 2026-01-13 01:13:31.045122 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.54s 2026-01-13 01:13:31.045128 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.99s 2026-01-13 01:13:31.045134 | 
orchestrator | Set public network to default ------------------------------------------- 6.14s 2026-01-13 01:13:31.045141 | orchestrator | Create public network --------------------------------------------------- 5.94s 2026-01-13 01:13:31.045159 | orchestrator | Create public subnet ---------------------------------------------------- 4.02s 2026-01-13 01:13:31.045165 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.60s 2026-01-13 01:13:31.045171 | orchestrator | Create manager role ----------------------------------------------------- 3.38s 2026-01-13 01:13:31.045177 | orchestrator | Gathering Facts --------------------------------------------------------- 1.81s 2026-01-13 01:13:33.554864 | orchestrator | 2026-01-13 01:13:33 | INFO  | It takes a moment until task 53360f45-5587-4812-b53d-d9a1608bcdef (image-manager) has been started and output is visible here. 2026-01-13 01:14:12.940760 | orchestrator | 2026-01-13 01:13:36 | INFO  | Processing image 'Cirros 0.6.2' 2026-01-13 01:14:12.940812 | orchestrator | 2026-01-13 01:13:36 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-01-13 01:14:12.940817 | orchestrator | 2026-01-13 01:13:36 | INFO  | Importing image Cirros 0.6.2 2026-01-13 01:14:12.940821 | orchestrator | 2026-01-13 01:13:36 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-13 01:14:12.940825 | orchestrator | 2026-01-13 01:13:38 | INFO  | Waiting for image to leave queued state... 2026-01-13 01:14:12.940831 | orchestrator | 2026-01-13 01:13:40 | INFO  | Waiting for import to complete... 
2026-01-13 01:14:12.940837 | orchestrator | 2026-01-13 01:13:50 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-01-13 01:14:12.940843 | orchestrator | 2026-01-13 01:13:50 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-01-13 01:14:12.940848 | orchestrator | 2026-01-13 01:13:50 | INFO  | Setting internal_version = 0.6.2 2026-01-13 01:14:12.940854 | orchestrator | 2026-01-13 01:13:50 | INFO  | Setting image_original_user = cirros 2026-01-13 01:14:12.940859 | orchestrator | 2026-01-13 01:13:50 | INFO  | Adding tag os:cirros 2026-01-13 01:14:12.940865 | orchestrator | 2026-01-13 01:13:50 | INFO  | Setting property architecture: x86_64 2026-01-13 01:14:12.940870 | orchestrator | 2026-01-13 01:13:51 | INFO  | Setting property hw_disk_bus: scsi 2026-01-13 01:14:12.940875 | orchestrator | 2026-01-13 01:13:51 | INFO  | Setting property hw_rng_model: virtio 2026-01-13 01:14:12.940881 | orchestrator | 2026-01-13 01:13:51 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-13 01:14:12.940886 | orchestrator | 2026-01-13 01:13:51 | INFO  | Setting property hw_watchdog_action: reset 2026-01-13 01:14:12.940892 | orchestrator | 2026-01-13 01:13:51 | INFO  | Setting property hypervisor_type: qemu 2026-01-13 01:14:12.940895 | orchestrator | 2026-01-13 01:13:51 | INFO  | Setting property os_distro: cirros 2026-01-13 01:14:12.940898 | orchestrator | 2026-01-13 01:13:52 | INFO  | Setting property os_purpose: minimal 2026-01-13 01:14:12.940902 | orchestrator | 2026-01-13 01:13:52 | INFO  | Setting property replace_frequency: never 2026-01-13 01:14:12.940905 | orchestrator | 2026-01-13 01:13:52 | INFO  | Setting property uuid_validity: none 2026-01-13 01:14:12.940908 | orchestrator | 2026-01-13 01:13:52 | INFO  | Setting property provided_until: none 2026-01-13 01:14:12.940911 | orchestrator | 2026-01-13 01:13:52 | INFO  | Setting property image_description: Cirros 2026-01-13 01:14:12.940914 | orchestrator | 2026-01-13 01:13:53 | INFO  | 
Setting property image_name: Cirros 2026-01-13 01:14:12.940917 | orchestrator | 2026-01-13 01:13:53 | INFO  | Setting property internal_version: 0.6.2 2026-01-13 01:14:12.940920 | orchestrator | 2026-01-13 01:13:53 | INFO  | Setting property image_original_user: cirros 2026-01-13 01:14:12.940933 | orchestrator | 2026-01-13 01:13:53 | INFO  | Setting property os_version: 0.6.2 2026-01-13 01:14:12.940942 | orchestrator | 2026-01-13 01:13:53 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-13 01:14:12.940945 | orchestrator | 2026-01-13 01:13:54 | INFO  | Setting property image_build_date: 2023-05-30 2026-01-13 01:14:12.940949 | orchestrator | 2026-01-13 01:13:54 | INFO  | Checking status of 'Cirros 0.6.2' 2026-01-13 01:14:12.940952 | orchestrator | 2026-01-13 01:13:54 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-01-13 01:14:12.940955 | orchestrator | 2026-01-13 01:13:54 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-01-13 01:14:12.940958 | orchestrator | 2026-01-13 01:13:54 | INFO  | Processing image 'Cirros 0.6.3' 2026-01-13 01:14:12.940963 | orchestrator | 2026-01-13 01:13:54 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-01-13 01:14:12.940971 | orchestrator | 2026-01-13 01:13:54 | INFO  | Importing image Cirros 0.6.3 2026-01-13 01:14:12.940974 | orchestrator | 2026-01-13 01:13:54 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-13 01:14:12.940977 | orchestrator | 2026-01-13 01:13:56 | INFO  | Waiting for image to leave queued state... 2026-01-13 01:14:12.940980 | orchestrator | 2026-01-13 01:13:58 | INFO  | Waiting for import to complete... 
2026-01-13 01:14:12.940991 | orchestrator | 2026-01-13 01:14:09 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-01-13 01:14:12.940995 | orchestrator | 2026-01-13 01:14:09 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-01-13 01:14:12.940999 | orchestrator | 2026-01-13 01:14:09 | INFO  | Setting internal_version = 0.6.3 2026-01-13 01:14:12.941004 | orchestrator | 2026-01-13 01:14:09 | INFO  | Setting image_original_user = cirros 2026-01-13 01:14:12.941012 | orchestrator | 2026-01-13 01:14:09 | INFO  | Adding tag os:cirros 2026-01-13 01:14:12.941018 | orchestrator | 2026-01-13 01:14:09 | INFO  | Setting property architecture: x86_64 2026-01-13 01:14:12.941022 | orchestrator | 2026-01-13 01:14:09 | INFO  | Setting property hw_disk_bus: scsi 2026-01-13 01:14:12.941027 | orchestrator | 2026-01-13 01:14:10 | INFO  | Setting property hw_rng_model: virtio 2026-01-13 01:14:12.941032 | orchestrator | 2026-01-13 01:14:10 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-13 01:14:12.941038 | orchestrator | 2026-01-13 01:14:10 | INFO  | Setting property hw_watchdog_action: reset 2026-01-13 01:14:12.941043 | orchestrator | 2026-01-13 01:14:10 | INFO  | Setting property hypervisor_type: qemu 2026-01-13 01:14:12.941049 | orchestrator | 2026-01-13 01:14:10 | INFO  | Setting property os_distro: cirros 2026-01-13 01:14:12.941052 | orchestrator | 2026-01-13 01:14:10 | INFO  | Setting property os_purpose: minimal 2026-01-13 01:14:12.941055 | orchestrator | 2026-01-13 01:14:10 | INFO  | Setting property replace_frequency: never 2026-01-13 01:14:12.941058 | orchestrator | 2026-01-13 01:14:11 | INFO  | Setting property uuid_validity: none 2026-01-13 01:14:12.941062 | orchestrator | 2026-01-13 01:14:11 | INFO  | Setting property provided_until: none 2026-01-13 01:14:12.941065 | orchestrator | 2026-01-13 01:14:11 | INFO  | Setting property image_description: Cirros 2026-01-13 01:14:12.941068 | orchestrator | 2026-01-13 01:14:11 | INFO  | 
Setting property image_name: Cirros 2026-01-13 01:14:12.941071 | orchestrator | 2026-01-13 01:14:11 | INFO  | Setting property internal_version: 0.6.3 2026-01-13 01:14:12.941077 | orchestrator | 2026-01-13 01:14:11 | INFO  | Setting property image_original_user: cirros 2026-01-13 01:14:12.941080 | orchestrator | 2026-01-13 01:14:11 | INFO  | Setting property os_version: 0.6.3 2026-01-13 01:14:12.941083 | orchestrator | 2026-01-13 01:14:12 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-13 01:14:12.941087 | orchestrator | 2026-01-13 01:14:12 | INFO  | Setting property image_build_date: 2024-09-26 2026-01-13 01:14:12.941090 | orchestrator | 2026-01-13 01:14:12 | INFO  | Checking status of 'Cirros 0.6.3' 2026-01-13 01:14:12.941093 | orchestrator | 2026-01-13 01:14:12 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-01-13 01:14:12.941096 | orchestrator | 2026-01-13 01:14:12 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-01-13 01:14:13.345388 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-01-13 01:14:15.718589 | orchestrator | 2026-01-13 01:14:15 | INFO  | date: 2026-01-12 2026-01-13 01:14:15.719099 | orchestrator | 2026-01-13 01:14:15 | INFO  | image: octavia-amphora-haproxy-2024.2.20260112.qcow2 2026-01-13 01:14:15.719764 | orchestrator | 2026-01-13 01:14:15 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260112.qcow2 2026-01-13 01:14:15.719919 | orchestrator | 2026-01-13 01:14:15 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260112.qcow2.CHECKSUM 2026-01-13 01:14:15.854002 | orchestrator | 2026-01-13 01:14:15 | INFO  | checksum: 97638f6d53f99618f0ea9da8ee2e38e3567970c3d42f572e458295ca1456bb27 2026-01-13 01:14:15.932366 | orchestrator | 
2026-01-13 01:14:15 | INFO  | It takes a moment until task 87998ab9-56ee-40f7-9801-b62efada077a (image-manager) has been started and output is visible here. 2026-01-13 01:15:26.631780 | orchestrator | 2026-01-13 01:14:17 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-12' 2026-01-13 01:15:26.631862 | orchestrator | 2026-01-13 01:14:17 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260112.qcow2: 200 2026-01-13 01:15:26.631870 | orchestrator | 2026-01-13 01:14:17 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-12 2026-01-13 01:15:26.631875 | orchestrator | 2026-01-13 01:14:17 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260112.qcow2 2026-01-13 01:15:26.631881 | orchestrator | 2026-01-13 01:14:19 | INFO  | Waiting for image to leave queued state... 2026-01-13 01:15:26.631885 | orchestrator | 2026-01-13 01:14:21 | INFO  | Waiting for import to complete... 2026-01-13 01:15:26.631890 | orchestrator | 2026-01-13 01:14:31 | INFO  | Waiting for import to complete... 2026-01-13 01:15:26.631894 | orchestrator | 2026-01-13 01:14:41 | INFO  | Waiting for import to complete... 2026-01-13 01:15:26.631897 | orchestrator | 2026-01-13 01:14:51 | INFO  | Waiting for import to complete... 2026-01-13 01:15:26.631903 | orchestrator | 2026-01-13 01:15:01 | INFO  | Waiting for import to complete... 2026-01-13 01:15:26.631907 | orchestrator | 2026-01-13 01:15:11 | INFO  | Waiting for import to complete... 
2026-01-13 01:15:26.631911 | orchestrator | 2026-01-13 01:15:21 | INFO  | Import of 'OpenStack Octavia Amphora 2026-01-12' successfully completed, reloading images 2026-01-13 01:15:26.631916 | orchestrator | 2026-01-13 01:15:21 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-01-12' 2026-01-13 01:15:26.631935 | orchestrator | 2026-01-13 01:15:21 | INFO  | Setting internal_version = 2026-01-12 2026-01-13 01:15:26.631939 | orchestrator | 2026-01-13 01:15:21 | INFO  | Setting image_original_user = ubuntu 2026-01-13 01:15:26.631943 | orchestrator | 2026-01-13 01:15:21 | INFO  | Adding tag amphora 2026-01-13 01:15:26.631948 | orchestrator | 2026-01-13 01:15:22 | INFO  | Adding tag os:ubuntu 2026-01-13 01:15:26.631952 | orchestrator | 2026-01-13 01:15:22 | INFO  | Setting property architecture: x86_64 2026-01-13 01:15:26.631956 | orchestrator | 2026-01-13 01:15:22 | INFO  | Setting property hw_disk_bus: scsi 2026-01-13 01:15:26.631959 | orchestrator | 2026-01-13 01:15:22 | INFO  | Setting property hw_rng_model: virtio 2026-01-13 01:15:26.631963 | orchestrator | 2026-01-13 01:15:22 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-13 01:15:26.631967 | orchestrator | 2026-01-13 01:15:22 | INFO  | Setting property hw_watchdog_action: reset 2026-01-13 01:15:26.631971 | orchestrator | 2026-01-13 01:15:23 | INFO  | Setting property hypervisor_type: qemu 2026-01-13 01:15:26.631975 | orchestrator | 2026-01-13 01:15:23 | INFO  | Setting property os_distro: ubuntu 2026-01-13 01:15:26.631979 | orchestrator | 2026-01-13 01:15:23 | INFO  | Setting property replace_frequency: quarterly 2026-01-13 01:15:26.631982 | orchestrator | 2026-01-13 01:15:23 | INFO  | Setting property uuid_validity: last-1 2026-01-13 01:15:26.631986 | orchestrator | 2026-01-13 01:15:24 | INFO  | Setting property provided_until: none 2026-01-13 01:15:26.631990 | orchestrator | 2026-01-13 01:15:24 | INFO  | Setting property os_purpose: network 2026-01-13 01:15:26.632005 | orchestrator 
| 2026-01-13 01:15:24 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-01-13 01:15:26.632009 | orchestrator | 2026-01-13 01:15:24 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-01-13 01:15:26.632013 | orchestrator | 2026-01-13 01:15:24 | INFO  | Setting property internal_version: 2026-01-12 2026-01-13 01:15:26.632017 | orchestrator | 2026-01-13 01:15:25 | INFO  | Setting property image_original_user: ubuntu 2026-01-13 01:15:26.632021 | orchestrator | 2026-01-13 01:15:25 | INFO  | Setting property os_version: 2026-01-12 2026-01-13 01:15:26.632025 | orchestrator | 2026-01-13 01:15:25 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260112.qcow2 2026-01-13 01:15:26.632029 | orchestrator | 2026-01-13 01:15:25 | INFO  | Setting property image_build_date: 2026-01-12 2026-01-13 01:15:26.632033 | orchestrator | 2026-01-13 01:15:26 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-01-12' 2026-01-13 01:15:26.632036 | orchestrator | 2026-01-13 01:15:26 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-01-12' 2026-01-13 01:15:26.632051 | orchestrator | 2026-01-13 01:15:26 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-01-13 01:15:26.632055 | orchestrator | 2026-01-13 01:15:26 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-01-13 01:15:26.632060 | orchestrator | 2026-01-13 01:15:26 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-01-13 01:15:26.632063 | orchestrator | 2026-01-13 01:15:26 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-01-13 01:15:27.285692 | orchestrator | ok: Runtime: 0:03:06.008900 2026-01-13 01:15:27.308751 | 2026-01-13 01:15:27.308892 | TASK [Run checks] 2026-01-13 01:15:28.004854 | orchestrator | + set -e 2026-01-13 01:15:28.004992 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-01-13 01:15:28.005009 | orchestrator | ++ export INTERACTIVE=false 2026-01-13 01:15:28.005023 | orchestrator | ++ INTERACTIVE=false 2026-01-13 01:15:28.005031 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-13 01:15:28.005039 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-13 01:15:28.005047 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-13 01:15:28.006235 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-13 01:15:28.009075 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-13 01:15:28.009140 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-13 01:15:28.009383 | orchestrator | + echo 2026-01-13 01:15:28.009444 | orchestrator | 2026-01-13 01:15:28.010036 | orchestrator | # CHECK 2026-01-13 01:15:28.010053 | orchestrator | 2026-01-13 01:15:28.010071 | orchestrator | + echo '# CHECK' 2026-01-13 01:15:28.010079 | orchestrator | + echo 2026-01-13 01:15:28.010931 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-13 01:15:28.010998 | orchestrator | ++ semver latest 5.0.0 2026-01-13 01:15:28.070597 | orchestrator | 2026-01-13 01:15:28.070653 | orchestrator | ## Containers @ testbed-manager 2026-01-13 01:15:28.070664 | orchestrator | 2026-01-13 01:15:28.070673 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-13 01:15:28.070681 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-13 01:15:28.070688 | orchestrator | + echo 2026-01-13 01:15:28.070695 | orchestrator | + echo '## Containers @ testbed-manager' 2026-01-13 01:15:28.070702 | orchestrator | + echo 2026-01-13 01:15:28.070709 | orchestrator | + osism container testbed-manager ps 2026-01-13 01:15:30.073356 | orchestrator | 2026-01-13 01:15:30 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-01-13 01:15:30.417113 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
2026-01-13 01:15:30.417182 | orchestrator | beaafb719ec6 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_blackbox_exporter 2026-01-13 01:15:30.417195 | orchestrator | 1b4b453e1364 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_alertmanager 2026-01-13 01:15:30.417210 | orchestrator | d55060940741 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2026-01-13 01:15:30.417226 | orchestrator | 76f1ee0cf662 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2026-01-13 01:15:30.417245 | orchestrator | 4d3759f39f85 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_server 2026-01-13 01:15:30.417253 | orchestrator | 507f77bc0f9b registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 16 minutes cephclient 2026-01-13 01:15:30.417260 | orchestrator | ae1d9bd5208e registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2026-01-13 01:15:30.417267 | orchestrator | 607b2adb2324 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2026-01-13 01:15:30.417289 | orchestrator | b2a22f1a1da2 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2026-01-13 01:15:30.417296 | orchestrator | 9521b89830b6 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 30 minutes ago Up 30 minutes (healthy) 80/tcp phpmyadmin 2026-01-13 01:15:30.417303 | orchestrator | a6fdd77eefc8 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 31 minutes ago Up 30 minutes openstackclient 2026-01-13 01:15:30.417310 | orchestrator | d7d4e2e86381 
registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 31 minutes ago Up 30 minutes (healthy) 8080/tcp homer 2026-01-13 01:15:30.417317 | orchestrator | 76f921fdade5 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 54 minutes ago Up 54 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-01-13 01:15:30.417323 | orchestrator | dbd6bf8f01ad registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 58 minutes ago Up 37 minutes (healthy) manager-inventory_reconciler-1 2026-01-13 01:15:30.417330 | orchestrator | b689e6460e5a registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) osism-ansible 2026-01-13 01:15:30.417364 | orchestrator | ea10c30f1a66 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) osism-kubernetes 2026-01-13 01:15:30.417374 | orchestrator | 1e7366c9a3e3 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) ceph-ansible 2026-01-13 01:15:30.417383 | orchestrator | e0b054395a95 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) kolla-ansible 2026-01-13 01:15:30.417389 | orchestrator | a443ed407cc2 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 58 minutes ago Up 38 minutes (healthy) 8000/tcp manager-ara-server-1 2026-01-13 01:15:30.417395 | orchestrator | 2e264c9269d9 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 58 minutes ago Up 38 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-01-13 01:15:30.417401 | orchestrator | 3633f57a7562 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 38 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-01-13 01:15:30.417408 | orchestrator | eae986e0f082 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 58 minutes ago Up 38 minutes 
(healthy) osismclient 2026-01-13 01:15:30.417414 | orchestrator | 6220425b5e3d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 38 minutes (healthy) manager-beat-1 2026-01-13 01:15:30.417425 | orchestrator | 9bd9a5a8fc79 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 58 minutes ago Up 38 minutes (healthy) 6379/tcp manager-redis-1 2026-01-13 01:15:30.417432 | orchestrator | 28b9850cbc51 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 38 minutes (healthy) manager-openstack-1 2026-01-13 01:15:30.417439 | orchestrator | 67becce7ffcc registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 38 minutes (healthy) manager-listener-1 2026-01-13 01:15:30.417445 | orchestrator | 11f949886b70 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 58 minutes ago Up 38 minutes (healthy) 3306/tcp manager-mariadb-1 2026-01-13 01:15:30.417452 | orchestrator | b59d339ad122 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 38 minutes (healthy) manager-flower-1 2026-01-13 01:15:30.417458 | orchestrator | ae4ec133e15a registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-01-13 01:15:30.742046 | orchestrator | 2026-01-13 01:15:30.742113 | orchestrator | ## Images @ testbed-manager 2026-01-13 01:15:30.742127 | orchestrator | 2026-01-13 01:15:30.742137 | orchestrator | + echo 2026-01-13 01:15:30.742146 | orchestrator | + echo '## Images @ testbed-manager' 2026-01-13 01:15:30.742155 | orchestrator | + echo 2026-01-13 01:15:30.742168 | orchestrator | + osism container testbed-manager images 2026-01-13 01:15:33.373914 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-13 01:15:33.373989 | orchestrator | registry.osism.tech/osism/osism-ansible latest 
c94902cef8b2 About an hour ago 611MB 2026-01-13 01:15:33.373997 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 035febbec78c About an hour ago 608MB 2026-01-13 01:15:33.374002 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 5821b8692edb About an hour ago 560MB 2026-01-13 01:15:33.374007 | orchestrator | registry.osism.tech/osism/osism latest e6bbfeb96ae9 About an hour ago 384MB 2026-01-13 01:15:33.374012 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest ddf1aa7937f6 About an hour ago 1.23GB 2026-01-13 01:15:33.374037 | orchestrator | registry.osism.tech/osism/osism-frontend latest 4df35c5f873f About an hour ago 239MB 2026-01-13 01:15:33.374042 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 2cef1bd78903 About an hour ago 335MB 2026-01-13 01:15:33.374047 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 1f7c62886d13 17 hours ago 238MB 2026-01-13 01:15:33.374051 | orchestrator | registry.osism.tech/osism/cephclient reef 762d7b884358 22 hours ago 454MB 2026-01-13 01:15:33.374057 | orchestrator | registry.osism.tech/kolla/cron 2024.2 18397a23bf8e 23 hours ago 271MB 2026-01-13 01:15:33.374062 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 41bede1e920d 23 hours ago 675MB 2026-01-13 01:15:33.374067 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 7623cc118b47 23 hours ago 585MB 2026-01-13 01:15:33.374071 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 61c1e581ebcf 23 hours ago 313MB 2026-01-13 01:15:33.374087 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 a99e23a01ef0 23 hours ago 311MB 2026-01-13 01:15:33.374092 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 6acc9df66227 23 hours ago 844MB 2026-01-13 01:15:33.374097 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 9b46a2a48036 23 hours ago 409MB 2026-01-13 01:15:33.374102 | orchestrator | 
registry.osism.tech/kolla/prometheus-cadvisor 2024.2 aac9a35ae18a 23 hours ago 363MB 2026-01-13 01:15:33.374107 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 5 weeks ago 11.5MB 2026-01-13 01:15:33.374112 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 2 months ago 334MB 2026-01-13 01:15:33.374117 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine 13105d2858de 2 months ago 41.4MB 2026-01-13 01:15:33.374122 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 3 months ago 742MB 2026-01-13 01:15:33.374127 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 4 months ago 275MB 2026-01-13 01:15:33.374131 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 5 months ago 226MB 2026-01-13 01:15:33.374136 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 19 months ago 146MB 2026-01-13 01:15:33.846544 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-13 01:15:33.846753 | orchestrator | ++ semver latest 5.0.0 2026-01-13 01:15:33.898896 | orchestrator | 2026-01-13 01:15:33.898949 | orchestrator | ## Containers @ testbed-node-0 2026-01-13 01:15:33.898958 | orchestrator | 2026-01-13 01:15:33.898964 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-13 01:15:33.898971 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-13 01:15:33.898978 | orchestrator | + echo 2026-01-13 01:15:33.898985 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-01-13 01:15:33.898993 | orchestrator | + echo 2026-01-13 01:15:33.899000 | orchestrator | + osism container testbed-node-0 ps 2026-01-13 01:15:36.371672 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-13 01:15:36.371735 | orchestrator | ea188795c6c1 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-01-13 
01:15:36.371744 | orchestrator | 2380e2574883 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-01-13 01:15:36.371752 | orchestrator | 3ae4641f6e4e registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-01-13 01:15:36.371770 | orchestrator | 070dbd4cc6dd registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-01-13 01:15:36.371778 | orchestrator | 94284355a770 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-01-13 01:15:36.371784 | orchestrator | 8d4471753615 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2026-01-13 01:15:36.371791 | orchestrator | 462cbafa5532 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-01-13 01:15:36.371798 | orchestrator | 7c54a847b027 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2026-01-13 01:15:36.371818 | orchestrator | d48ceb80e386 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-01-13 01:15:36.371825 | orchestrator | d6305862ef7c registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2026-01-13 01:15:36.371832 | orchestrator | 62f14fda8460 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup 2026-01-13 01:15:36.371839 | orchestrator | 239f51dbfcd3 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_volume 2026-01-13 01:15:36.371846 | 
orchestrator | ed57ad60b5e1 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2026-01-13 01:15:36.371852 | orchestrator | a0c26107ea02 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2026-01-13 01:15:36.371859 | orchestrator | c62c68d72bdd registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2026-01-13 01:15:36.371866 | orchestrator | 04766d8c4ed4 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2026-01-13 01:15:36.371873 | orchestrator | 220256b5ba6e registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2026-01-13 01:15:36.371880 | orchestrator | 657a8dd50a44 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2026-01-13 01:15:36.371886 | orchestrator | dc26c4b469d8 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2026-01-13 01:15:36.371893 | orchestrator | 741d2a945ecf registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2026-01-13 01:15:36.371899 | orchestrator | d034b962119d registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2026-01-13 01:15:36.371917 | orchestrator | 61a3551be619 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2026-01-13 01:15:36.371927 | orchestrator | b45c0ccd8ffa registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 
minutes (healthy) neutron_server 2026-01-13 01:15:36.371934 | orchestrator | 8ec28903b5ee registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2026-01-13 01:15:36.371940 | orchestrator | db903281ba06 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns 2026-01-13 01:15:36.371950 | orchestrator | ba58d1f35e07 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer 2026-01-13 01:15:36.371956 | orchestrator | df783840ea8c registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central 2026-01-13 01:15:36.371963 | orchestrator | ff822d10bde2 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2026-01-13 01:15:36.371974 | orchestrator | 4af526a74eb1 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_api 2026-01-13 01:15:36.371980 | orchestrator | 9028ed0703eb registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2026-01-13 01:15:36.371987 | orchestrator | d0f93aa70de9 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2026-01-13 01:15:36.371994 | orchestrator | b011bf913b67 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2026-01-13 01:15:36.372000 | orchestrator | b84cfe27a80b registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2026-01-13 01:15:36.372007 | orchestrator | 1b82559e5a07 
registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api 2026-01-13 01:15:36.372014 | orchestrator | b17f16b43353 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-01-13 01:15:36.372020 | orchestrator | 74106a3d1992 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-01-13 01:15:36.372027 | orchestrator | 5bc9165738c3 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2026-01-13 01:15:36.372034 | orchestrator | e65e4adc51c6 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2026-01-13 01:15:36.372040 | orchestrator | 622071ae0e63 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-01-13 01:15:36.372047 | orchestrator | e58ff6e6c1bf registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2026-01-13 01:15:36.372053 | orchestrator | a3d338fc1e63 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0 2026-01-13 01:15:36.372060 | orchestrator | 77dcf89d1c62 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2026-01-13 01:15:36.372067 | orchestrator | d06c84aed363 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2026-01-13 01:15:36.372073 | orchestrator | e3a535de601e registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2026-01-13 01:15:36.372084 | orchestrator | 98cb6254ccb9 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 
23 minutes ago Up 23 minutes (healthy) haproxy 2026-01-13 01:15:36.372090 | orchestrator | b0f455a2a0fd registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2026-01-13 01:15:36.372100 | orchestrator | f056a300284a registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2026-01-13 01:15:36.372110 | orchestrator | 03ee67e9ef7c registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2026-01-13 01:15:36.372117 | orchestrator | 000f2103ee99 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0 2026-01-13 01:15:36.372123 | orchestrator | 5583095e6fa6 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2026-01-13 01:15:36.372130 | orchestrator | 8b7e7c971ce6 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2026-01-13 01:15:36.372136 | orchestrator | e3997df8706a registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2026-01-13 01:15:36.372143 | orchestrator | 1f42c21a1465 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2026-01-13 01:15:36.372149 | orchestrator | 25af468da4ed registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2026-01-13 01:15:36.372156 | orchestrator | 0c9311ab9a35 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2026-01-13 01:15:36.372163 | orchestrator | cb2d99d448b9 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2026-01-13 01:15:36.372169 | 
orchestrator | 39b51e8cb6bd registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2026-01-13 01:15:36.372176 | orchestrator | bc1c26491d65 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2026-01-13 01:15:36.372183 | orchestrator | 35ade5107c7a registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2026-01-13 01:15:36.752841 | orchestrator | 2026-01-13 01:15:36.752892 | orchestrator | ## Images @ testbed-node-0 2026-01-13 01:15:36.752898 | orchestrator | 2026-01-13 01:15:36.752903 | orchestrator | + echo 2026-01-13 01:15:36.752907 | orchestrator | + echo '## Images @ testbed-node-0' 2026-01-13 01:15:36.752912 | orchestrator | + echo 2026-01-13 01:15:36.752916 | orchestrator | + osism container testbed-node-0 images 2026-01-13 01:15:39.178037 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-13 01:15:39.178094 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 8fb293e8afdf 22 hours ago 1.27GB 2026-01-13 01:15:39.178109 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 abdd8fb3d2fa 23 hours ago 1.02GB 2026-01-13 01:15:39.178113 | orchestrator | registry.osism.tech/kolla/cron 2024.2 18397a23bf8e 23 hours ago 271MB 2026-01-13 01:15:39.178117 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 df276d2b69ca 23 hours ago 282MB 2026-01-13 01:15:39.178121 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 ac6c32a007ae 23 hours ago 272MB 2026-01-13 01:15:39.178125 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 c93da708442d 23 hours ago 279MB 2026-01-13 01:15:39.178129 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 57c3a6a21e67 23 hours ago 417MB 2026-01-13 01:15:39.178133 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 b58edbb0aebf 23 hours ago 1.56GB 2026-01-13 01:15:39.178137 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 
2024.2 f47e7f5ea7ea 23 hours ago 1.53GB 2026-01-13 01:15:39.178149 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 701b7d6baaed 23 hours ago 328MB 2026-01-13 01:15:39.178153 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 41bede1e920d 23 hours ago 675MB 2026-01-13 01:15:39.178157 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 7623cc118b47 23 hours ago 585MB 2026-01-13 01:15:39.178161 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 51338ce4d7d7 23 hours ago 1.16GB 2026-01-13 01:15:39.178164 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 d62c246bdd51 23 hours ago 284MB 2026-01-13 01:15:39.178168 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 3802479789de 23 hours ago 284MB 2026-01-13 01:15:39.178172 | orchestrator | registry.osism.tech/kolla/redis 2024.2 8be0abeada49 23 hours ago 278MB 2026-01-13 01:15:39.178176 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 3659f0b3baf8 23 hours ago 278MB 2026-01-13 01:15:39.178179 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 296069031ebe 23 hours ago 297MB 2026-01-13 01:15:39.178183 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 39f19aba075b 23 hours ago 304MB 2026-01-13 01:15:39.178187 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 a99e23a01ef0 23 hours ago 311MB 2026-01-13 01:15:39.178192 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 18d09bc40715 23 hours ago 306MB 2026-01-13 01:15:39.178199 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 aac9a35ae18a 23 hours ago 363MB 2026-01-13 01:15:39.178208 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 51f8ffe4ca6e 23 hours ago 458MB 2026-01-13 01:15:39.178216 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 60fd465cc253 23 hours ago 1.09GB 2026-01-13 01:15:39.178222 | orchestrator | 
registry.osism.tech/kolla/keystone-fernet 2024.2 4528c8236356 23 hours ago 1.04GB 2026-01-13 01:15:39.178243 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 e06e3ea5a6dd 23 hours ago 1.05GB 2026-01-13 01:15:39.178249 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 f3982436dcf0 23 hours ago 1.41GB 2026-01-13 01:15:39.178255 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 53fbf5812506 23 hours ago 1.42GB 2026-01-13 01:15:39.178260 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 68c231276e30 23 hours ago 1.72GB 2026-01-13 01:15:39.178266 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 bf190a1bf98f 23 hours ago 1.41GB 2026-01-13 01:15:39.178271 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 a59addf52e55 23 hours ago 1.22GB 2026-01-13 01:15:39.178277 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 814d90987a6f 23 hours ago 1.22GB 2026-01-13 01:15:39.178282 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 5ad6b55b48f5 23 hours ago 1.22GB 2026-01-13 01:15:39.178289 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 4c252f4aac36 23 hours ago 1.37GB 2026-01-13 01:15:39.178300 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 418791135695 23 hours ago 981MB 2026-01-13 01:15:39.178306 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 272846600dc6 23 hours ago 1.03GB 2026-01-13 01:15:39.178326 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1283503f2d5d 23 hours ago 1.03GB 2026-01-13 01:15:39.178333 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 69111ffbeefe 23 hours ago 1.03GB 2026-01-13 01:15:39.178340 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 9c00e76773d5 23 hours ago 1.06GB 2026-01-13 01:15:39.178352 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 ce8f95c27da6 23 hours ago 1.06GB 2026-01-13 01:15:39.178358 | 
orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 4d2bc901d4e5 23 hours ago 1.13GB 2026-01-13 01:15:39.178361 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 d96c5aaa22bc 23 hours ago 1.25GB 2026-01-13 01:15:39.178368 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 35d6a1495d78 23 hours ago 1.17GB 2026-01-13 01:15:39.178374 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 7bcfdc7255dd 23 hours ago 994MB 2026-01-13 01:15:39.178380 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 c916d4869f2e 23 hours ago 989MB 2026-01-13 01:15:39.178386 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 2de34a8fe642 23 hours ago 990MB 2026-01-13 01:15:39.178392 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 bdd612831e36 23 hours ago 990MB 2026-01-13 01:15:39.178397 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 2e6c856929d9 23 hours ago 990MB 2026-01-13 01:15:39.178404 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 89d7df329e57 23 hours ago 994MB 2026-01-13 01:15:39.178409 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 56e7f91c3fab 23 hours ago 981MB 2026-01-13 01:15:39.178415 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 e2d1fc7af8d7 23 hours ago 982MB 2026-01-13 01:15:39.178421 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 dbc6b3f69c5d 23 hours ago 997MB 2026-01-13 01:15:39.178428 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9a0da1090b9d 23 hours ago 996MB 2026-01-13 01:15:39.178433 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 e83ecb5b3f20 23 hours ago 997MB 2026-01-13 01:15:39.178439 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 b64cd2c2f9fc 23 hours ago 1.1GB 2026-01-13 01:15:39.178445 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 1245eead9e1b 23 hours ago 995MB 
2026-01-13 01:15:39.178451 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 02f0b8255576 23 hours ago 1.05GB 2026-01-13 01:15:39.178458 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 a43a2e0ff586 23 hours ago 980MB 2026-01-13 01:15:39.178468 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 464d439ccf30 23 hours ago 980MB 2026-01-13 01:15:39.178474 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 60e1bd484c4b 23 hours ago 980MB 2026-01-13 01:15:39.178481 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 e620ea9760b8 23 hours ago 979MB 2026-01-13 01:15:39.178487 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 6b3dfa3e1db7 23 hours ago 846MB 2026-01-13 01:15:39.178494 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 954866669b8c 23 hours ago 846MB 2026-01-13 01:15:39.178501 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 8724a678703f 23 hours ago 846MB 2026-01-13 01:15:39.178506 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 2e1bcc6f62cb 23 hours ago 846MB 2026-01-13 01:15:39.480255 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-13 01:15:39.481172 | orchestrator | ++ semver latest 5.0.0 2026-01-13 01:15:39.550172 | orchestrator | 2026-01-13 01:15:39.550246 | orchestrator | ## Containers @ testbed-node-1 2026-01-13 01:15:39.550256 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-13 01:15:39.550262 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-13 01:15:39.550267 | orchestrator | + echo 2026-01-13 01:15:39.550273 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-01-13 01:15:39.550297 | orchestrator | 2026-01-13 01:15:39.550303 | orchestrator | + echo 2026-01-13 01:15:39.550308 | orchestrator | + osism container testbed-node-1 ps 2026-01-13 01:15:42.032054 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-13 01:15:42.032103 | 
orchestrator | 6d070a01e0ba registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-01-13 01:15:42.032109 | orchestrator | ef7abf888115 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-01-13 01:15:42.032114 | orchestrator | e19ed0666c5a registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-01-13 01:15:42.032118 | orchestrator | 1f78bbd2edd7 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-01-13 01:15:42.032122 | orchestrator | ceaf14fb5ff3 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_api 2026-01-13 01:15:42.032126 | orchestrator | 499ed120bece registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2026-01-13 01:15:42.032130 | orchestrator | b25e7cdd2871 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-01-13 01:15:42.032133 | orchestrator | 7a0dc4bf0d1a registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-01-13 01:15:42.032139 | orchestrator | 355e4bd73e99 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2026-01-13 01:15:42.032143 | orchestrator | a36617691ec6 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-01-13 01:15:42.032147 | orchestrator | 2174ff3d2519 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup 2026-01-13 01:15:42.032151 | orchestrator | 5483ab849afc 
registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_volume 2026-01-13 01:15:42.032154 | orchestrator | 6743e3764bd4 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2026-01-13 01:15:42.032158 | orchestrator | e727d5097483 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2026-01-13 01:15:42.032175 | orchestrator | 7fd2e5b353eb registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2026-01-13 01:15:42.032182 | orchestrator | 58b0950ef285 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2026-01-13 01:15:42.032189 | orchestrator | baa8dff29690 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2026-01-13 01:15:42.032196 | orchestrator | 6b1fd66c4011 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2026-01-13 01:15:42.032214 | orchestrator | 336eb7c80b7d registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2026-01-13 01:15:42.032233 | orchestrator | ce571bae73c1 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2026-01-13 01:15:42.032238 | orchestrator | 8b13b6c200fb registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2026-01-13 01:15:42.032251 | orchestrator | 4852346278aa registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 
2026-01-13 01:15:42.032458 | orchestrator | b9ae4948f091 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server 2026-01-13 01:15:42.032467 | orchestrator | 4bfcaff86703 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2026-01-13 01:15:42.032471 | orchestrator | 9aed24fa714b registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns 2026-01-13 01:15:42.032474 | orchestrator | 1195733f1647 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer 2026-01-13 01:15:42.032479 | orchestrator | 00a03e326baf registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central 2026-01-13 01:15:42.032484 | orchestrator | 99bf66434f3f registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2026-01-13 01:15:42.032491 | orchestrator | 0957003d948d registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2026-01-13 01:15:42.032496 | orchestrator | 438609e65c4d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2026-01-13 01:15:42.032500 | orchestrator | 340d4bad038e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2026-01-13 01:15:42.032503 | orchestrator | b674b86a5c23 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2026-01-13 01:15:42.032507 | orchestrator | 356e680e67cf registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 15 minutes 
ago Up 15 minutes (healthy) barbican_keystone_listener 2026-01-13 01:15:42.032511 | orchestrator | 2ac9b005a326 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api 2026-01-13 01:15:42.032520 | orchestrator | e2e44ea2a969 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-01-13 01:15:42.032528 | orchestrator | 03e8c3e57406 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2026-01-13 01:15:42.032532 | orchestrator | f18cc30a0787 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-01-13 01:15:42.032540 | orchestrator | 8ba0620e1cfb registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2026-01-13 01:15:42.032548 | orchestrator | c38a7f6e5bd3 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2026-01-13 01:15:42.032552 | orchestrator | 08e025b80e40 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2026-01-13 01:15:42.032556 | orchestrator | fc97a97867f7 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2026-01-13 01:15:42.032560 | orchestrator | 0fae9b7a4be8 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1 2026-01-13 01:15:42.032564 | orchestrator | aea789e7bffb registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2026-01-13 01:15:42.032568 | orchestrator | dc3e7dc2688f registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2026-01-13 
01:15:42.032575 | orchestrator | b1897adcd8c4 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2026-01-13 01:15:42.032579 | orchestrator | 1b52b52981fa registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2026-01-13 01:15:42.032583 | orchestrator | f6d9142811d9 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db 2026-01-13 01:15:42.032587 | orchestrator | 1d05f2e7c1ae registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db 2026-01-13 01:15:42.032590 | orchestrator | 28fc385ae1c1 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2026-01-13 01:15:42.032594 | orchestrator | 84ac1be37239 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-1 2026-01-13 01:15:42.032598 | orchestrator | b7e3fe639cae registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2026-01-13 01:15:42.032602 | orchestrator | d3b0f0da827d registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2026-01-13 01:15:42.032605 | orchestrator | 04ac4ac09c22 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2026-01-13 01:15:42.032609 | orchestrator | 67fec7434b5d registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2026-01-13 01:15:42.032613 | orchestrator | 9bd790c0f9a2 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2026-01-13 01:15:42.032617 | orchestrator | 243c917244b1 
registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2026-01-13 01:15:42.032620 | orchestrator | 0c552c4f7a35 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2026-01-13 01:15:42.032624 | orchestrator | addc861f1229 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2026-01-13 01:15:42.032630 | orchestrator | 94d921799108 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2026-01-13 01:15:42.413667 | orchestrator | 2026-01-13 01:15:42.413723 | orchestrator | ## Images @ testbed-node-1 2026-01-13 01:15:42.413733 | orchestrator | 2026-01-13 01:15:42.413740 | orchestrator | + echo 2026-01-13 01:15:42.413746 | orchestrator | + echo '## Images @ testbed-node-1' 2026-01-13 01:15:42.413753 | orchestrator | + echo 2026-01-13 01:15:42.413760 | orchestrator | + osism container testbed-node-1 images 2026-01-13 01:15:44.954661 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-13 01:15:44.954722 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 8fb293e8afdf 22 hours ago 1.27GB 2026-01-13 01:15:44.954733 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 abdd8fb3d2fa 23 hours ago 1.02GB 2026-01-13 01:15:44.954740 | orchestrator | registry.osism.tech/kolla/cron 2024.2 18397a23bf8e 23 hours ago 271MB 2026-01-13 01:15:44.954746 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 ac6c32a007ae 23 hours ago 272MB 2026-01-13 01:15:44.954752 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 df276d2b69ca 23 hours ago 282MB 2026-01-13 01:15:44.954758 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 c93da708442d 23 hours ago 279MB 2026-01-13 01:15:44.954777 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 57c3a6a21e67 23 hours ago 417MB 2026-01-13 01:15:44.954784 | orchestrator | 
registry.osism.tech/kolla/opensearch 2024.2 b58edbb0aebf 23 hours ago 1.56GB 2026-01-13 01:15:44.954790 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f47e7f5ea7ea 23 hours ago 1.53GB 2026-01-13 01:15:44.954796 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 701b7d6baaed 23 hours ago 328MB 2026-01-13 01:15:44.954805 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 41bede1e920d 23 hours ago 675MB 2026-01-13 01:15:44.954811 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 7623cc118b47 23 hours ago 585MB 2026-01-13 01:15:44.954817 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 51338ce4d7d7 23 hours ago 1.16GB 2026-01-13 01:15:44.954823 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 d62c246bdd51 23 hours ago 284MB 2026-01-13 01:15:44.954829 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 3802479789de 23 hours ago 284MB 2026-01-13 01:15:44.954835 | orchestrator | registry.osism.tech/kolla/redis 2024.2 8be0abeada49 23 hours ago 278MB 2026-01-13 01:15:44.954841 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 3659f0b3baf8 23 hours ago 278MB 2026-01-13 01:15:44.954847 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 296069031ebe 23 hours ago 297MB 2026-01-13 01:15:44.954853 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 39f19aba075b 23 hours ago 304MB 2026-01-13 01:15:44.954860 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 a99e23a01ef0 23 hours ago 311MB 2026-01-13 01:15:44.954866 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 18d09bc40715 23 hours ago 306MB 2026-01-13 01:15:44.954872 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 aac9a35ae18a 23 hours ago 363MB 2026-01-13 01:15:44.954878 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 51f8ffe4ca6e 23 hours ago 458MB 
2026-01-13 01:15:44.954884 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 60fd465cc253 23 hours ago 1.09GB 2026-01-13 01:15:44.954902 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 4528c8236356 23 hours ago 1.04GB 2026-01-13 01:15:44.954908 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 e06e3ea5a6dd 23 hours ago 1.05GB 2026-01-13 01:15:44.954914 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 f3982436dcf0 23 hours ago 1.41GB 2026-01-13 01:15:44.954920 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 53fbf5812506 23 hours ago 1.42GB 2026-01-13 01:15:44.954927 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 68c231276e30 23 hours ago 1.72GB 2026-01-13 01:15:44.954933 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 bf190a1bf98f 23 hours ago 1.41GB 2026-01-13 01:15:44.954939 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 a59addf52e55 23 hours ago 1.22GB 2026-01-13 01:15:44.954945 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 814d90987a6f 23 hours ago 1.22GB 2026-01-13 01:15:44.954952 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 5ad6b55b48f5 23 hours ago 1.22GB 2026-01-13 01:15:44.954958 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 4c252f4aac36 23 hours ago 1.37GB 2026-01-13 01:15:44.954964 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 418791135695 23 hours ago 981MB 2026-01-13 01:15:44.954970 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 272846600dc6 23 hours ago 1.03GB 2026-01-13 01:15:44.954987 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1283503f2d5d 23 hours ago 1.03GB 2026-01-13 01:15:44.954994 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 69111ffbeefe 23 hours ago 1.03GB 2026-01-13 01:15:44.955001 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 9c00e76773d5 23 hours ago 
1.06GB 2026-01-13 01:15:44.955008 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 ce8f95c27da6 23 hours ago 1.06GB 2026-01-13 01:15:44.955014 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 4d2bc901d4e5 23 hours ago 1.13GB 2026-01-13 01:15:44.955021 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 d96c5aaa22bc 23 hours ago 1.25GB 2026-01-13 01:15:44.955036 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 35d6a1495d78 23 hours ago 1.17GB 2026-01-13 01:15:44.955042 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 7bcfdc7255dd 23 hours ago 994MB 2026-01-13 01:15:44.955048 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 c916d4869f2e 23 hours ago 989MB 2026-01-13 01:15:44.955055 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 2de34a8fe642 23 hours ago 990MB 2026-01-13 01:15:44.955061 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 bdd612831e36 23 hours ago 990MB 2026-01-13 01:15:44.955068 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 2e6c856929d9 23 hours ago 990MB 2026-01-13 01:15:44.955075 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 89d7df329e57 23 hours ago 994MB 2026-01-13 01:15:44.955082 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 dbc6b3f69c5d 23 hours ago 997MB 2026-01-13 01:15:44.955088 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9a0da1090b9d 23 hours ago 996MB 2026-01-13 01:15:44.955094 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 e83ecb5b3f20 23 hours ago 997MB 2026-01-13 01:15:44.955100 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 b64cd2c2f9fc 23 hours ago 1.1GB 2026-01-13 01:15:44.955106 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 6b3dfa3e1db7 23 hours ago 846MB 2026-01-13 01:15:44.955119 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 954866669b8c 23 
hours ago 846MB 2026-01-13 01:15:44.955129 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 8724a678703f 23 hours ago 846MB 2026-01-13 01:15:44.955136 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 2e1bcc6f62cb 23 hours ago 846MB 2026-01-13 01:15:45.370670 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-13 01:15:45.370807 | orchestrator | ++ semver latest 5.0.0 2026-01-13 01:15:45.430341 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-13 01:15:45.430558 | orchestrator | 2026-01-13 01:15:45.430578 | orchestrator | ## Containers @ testbed-node-2 2026-01-13 01:15:45.430586 | orchestrator | 2026-01-13 01:15:45.430593 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-13 01:15:45.430600 | orchestrator | + echo 2026-01-13 01:15:45.430609 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-01-13 01:15:45.430617 | orchestrator | + echo 2026-01-13 01:15:45.430625 | orchestrator | + osism container testbed-node-2 ps 2026-01-13 01:15:47.855654 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-13 01:15:47.855697 | orchestrator | a81c4b68fe98 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-01-13 01:15:47.855702 | orchestrator | c86f5574a257 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-01-13 01:15:47.855706 | orchestrator | 821a9d333f94 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-01-13 01:15:47.855709 | orchestrator | 00bd8e2743a0 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-01-13 01:15:47.855712 | orchestrator | efb72eae51a6 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes 
ago Up 5 minutes (healthy) octavia_api 2026-01-13 01:15:47.855715 | orchestrator | fff5f09e2d33 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2026-01-13 01:15:47.855718 | orchestrator | 75b6386ec898 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-01-13 01:15:47.855721 | orchestrator | 170195c7e378 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2026-01-13 01:15:47.855725 | orchestrator | 5ce179670754 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2026-01-13 01:15:47.855728 | orchestrator | e9fa2a8c6561 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-01-13 01:15:47.855731 | orchestrator | 5a0070b70b0e registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_backup 2026-01-13 01:15:47.855734 | orchestrator | eed660f315c7 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_volume 2026-01-13 01:15:47.855737 | orchestrator | d4ebfecf6db8 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2026-01-13 01:15:47.855741 | orchestrator | e0d2df62cfc0 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2026-01-13 01:15:47.855751 | orchestrator | eadbd876d04a registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2026-01-13 01:15:47.855754 | orchestrator | 642cbc96058d registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes 
prometheus_elasticsearch_exporter 2026-01-13 01:15:47.855758 | orchestrator | c007ac101a92 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2026-01-13 01:15:47.855761 | orchestrator | ca7cef65ce90 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2026-01-13 01:15:47.855765 | orchestrator | 759c3a94e90c registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2026-01-13 01:15:47.855768 | orchestrator | 42f151ffaa33 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2026-01-13 01:15:47.855771 | orchestrator | 4f47f34f6acb registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2026-01-13 01:15:47.855780 | orchestrator | 9819d4c0621d registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2026-01-13 01:15:47.855784 | orchestrator | 9ba1c3cfc4e0 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server 2026-01-13 01:15:47.855787 | orchestrator | 81bb6c51b709 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2026-01-13 01:15:47.855790 | orchestrator | a7cfb21e0460 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns 2026-01-13 01:15:47.855793 | orchestrator | 677a1f3fdd63 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer 2026-01-13 01:15:47.855796 | orchestrator | bb326272f15a 
registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2026-01-13 01:15:47.855799 | orchestrator | 3dfa91430cf3 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2026-01-13 01:15:47.855802 | orchestrator | 42c3971145b0 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2026-01-13 01:15:47.855808 | orchestrator | f678a7188e7c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2026-01-13 01:15:47.855812 | orchestrator | 420acd86d14e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2026-01-13 01:15:47.855815 | orchestrator | a9d4a3c52d07 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2026-01-13 01:15:47.855818 | orchestrator | 29cbcaa8bf53 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2026-01-13 01:15:47.855823 | orchestrator | 94b800366ebb registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api 2026-01-13 01:15:47.855826 | orchestrator | aae1d7a3322a registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-01-13 01:15:47.855829 | orchestrator | 454a7bc660a9 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2026-01-13 01:15:47.855832 | orchestrator | d5c0029d8b42 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-01-13 01:15:47.855836 | orchestrator 
| d0e831db5c33 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2026-01-13 01:15:47.855839 | orchestrator | 99beef20bac9 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2026-01-13 01:15:47.855842 | orchestrator | 3010d9f01137 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2026-01-13 01:15:47.855845 | orchestrator | c8aa6760e44c registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2026-01-13 01:15:47.855849 | orchestrator | b5b39c44de9e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2026-01-13 01:15:47.855852 | orchestrator | e5ddb7f2fe7f registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2026-01-13 01:15:47.855855 | orchestrator | 8c345e3d480f registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2026-01-13 01:15:47.855859 | orchestrator | 6ab1ede542d6 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2026-01-13 01:15:47.855863 | orchestrator | 01fc0f6f50b2 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd 2026-01-13 01:15:47.855866 | orchestrator | ec87d6288525 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db 2026-01-13 01:15:47.855869 | orchestrator | 54acb50e9e5d registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db 2026-01-13 01:15:47.855872 | orchestrator | fb40cfc2dc34 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago 
Up 27 minutes (healthy) rabbitmq 2026-01-13 01:15:47.855875 | orchestrator | a69ff4dd3c75 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2026-01-13 01:15:47.855878 | orchestrator | 01f2672e35a4 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2 2026-01-13 01:15:47.855881 | orchestrator | 07229e97d6f6 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2026-01-13 01:15:47.855884 | orchestrator | 5fb37a845f67 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2026-01-13 01:15:47.855891 | orchestrator | 655e2208ef2e registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2026-01-13 01:15:47.855894 | orchestrator | 517d5e35db12 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2026-01-13 01:15:47.855897 | orchestrator | 2e9774371971 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2026-01-13 01:15:47.855900 | orchestrator | c3e66589d1b9 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2026-01-13 01:15:47.855903 | orchestrator | 39a23c099af8 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2026-01-13 01:15:47.855906 | orchestrator | b0f8a3512151 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2026-01-13 01:15:48.192015 | orchestrator | 2026-01-13 01:15:48.192069 | orchestrator | ## Images @ testbed-node-2 2026-01-13 01:15:48.192076 | orchestrator | 2026-01-13 01:15:48.192081 | orchestrator | + echo 2026-01-13 01:15:48.192086 
| orchestrator | + echo '## Images @ testbed-node-2'
2026-01-13 01:15:48.192092 | orchestrator | + echo
2026-01-13 01:15:48.192097 | orchestrator | + osism container testbed-node-2 images
2026-01-13 01:15:50.544088 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-01-13 01:15:50.544141 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 8fb293e8afdf 22 hours ago 1.27GB
2026-01-13 01:15:50.544146 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 abdd8fb3d2fa 23 hours ago 1.02GB
2026-01-13 01:15:50.544150 | orchestrator | registry.osism.tech/kolla/cron 2024.2 18397a23bf8e 23 hours ago 271MB
2026-01-13 01:15:50.544153 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 df276d2b69ca 23 hours ago 282MB
2026-01-13 01:15:50.544156 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 ac6c32a007ae 23 hours ago 272MB
2026-01-13 01:15:50.544159 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 c93da708442d 23 hours ago 279MB
2026-01-13 01:15:50.544163 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 57c3a6a21e67 23 hours ago 417MB
2026-01-13 01:15:50.544166 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 b58edbb0aebf 23 hours ago 1.56GB
2026-01-13 01:15:50.544169 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f47e7f5ea7ea 23 hours ago 1.53GB
2026-01-13 01:15:50.544172 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 701b7d6baaed 23 hours ago 328MB
2026-01-13 01:15:50.544203 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 41bede1e920d 23 hours ago 675MB
2026-01-13 01:15:50.544230 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 7623cc118b47 23 hours ago 585MB
2026-01-13 01:15:50.544239 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 51338ce4d7d7 23 hours ago 1.16GB
2026-01-13 01:15:50.544244 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 d62c246bdd51 23 hours ago 284MB
2026-01-13 01:15:50.544254 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 3802479789de 23 hours ago 284MB
2026-01-13 01:15:50.544262 | orchestrator | registry.osism.tech/kolla/redis 2024.2 8be0abeada49 23 hours ago 278MB
2026-01-13 01:15:50.544268 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 3659f0b3baf8 23 hours ago 278MB
2026-01-13 01:15:50.544273 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 296069031ebe 23 hours ago 297MB
2026-01-13 01:15:50.544289 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 39f19aba075b 23 hours ago 304MB
2026-01-13 01:15:50.544295 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 a99e23a01ef0 23 hours ago 311MB
2026-01-13 01:15:50.544300 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 18d09bc40715 23 hours ago 306MB
2026-01-13 01:15:50.544305 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 aac9a35ae18a 23 hours ago 363MB
2026-01-13 01:15:50.544310 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 51f8ffe4ca6e 23 hours ago 458MB
2026-01-13 01:15:50.544315 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 60fd465cc253 23 hours ago 1.09GB
2026-01-13 01:15:50.544319 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 4528c8236356 23 hours ago 1.04GB
2026-01-13 01:15:50.544334 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 e06e3ea5a6dd 23 hours ago 1.05GB
2026-01-13 01:15:50.544340 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 f3982436dcf0 23 hours ago 1.41GB
2026-01-13 01:15:50.544344 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 53fbf5812506 23 hours ago 1.42GB
2026-01-13 01:15:50.544349 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 68c231276e30 23 hours ago 1.72GB
2026-01-13 01:15:50.544354 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 bf190a1bf98f 23 hours ago 1.41GB
2026-01-13 01:15:50.544359 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 a59addf52e55 23 hours ago 1.22GB
2026-01-13 01:15:50.544364 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 814d90987a6f 23 hours ago 1.22GB
2026-01-13 01:15:50.544369 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 5ad6b55b48f5 23 hours ago 1.22GB
2026-01-13 01:15:50.544374 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 4c252f4aac36 23 hours ago 1.37GB
2026-01-13 01:15:50.544379 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 418791135695 23 hours ago 981MB
2026-01-13 01:15:50.544383 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 272846600dc6 23 hours ago 1.03GB
2026-01-13 01:15:50.544410 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1283503f2d5d 23 hours ago 1.03GB
2026-01-13 01:15:50.544416 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 69111ffbeefe 23 hours ago 1.03GB
2026-01-13 01:15:50.544425 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 9c00e76773d5 23 hours ago 1.06GB
2026-01-13 01:15:50.544430 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 ce8f95c27da6 23 hours ago 1.06GB
2026-01-13 01:15:50.544467 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 4d2bc901d4e5 23 hours ago 1.13GB
2026-01-13 01:15:50.544473 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 d96c5aaa22bc 23 hours ago 1.25GB
2026-01-13 01:15:50.544476 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 35d6a1495d78 23 hours ago 1.17GB
2026-01-13 01:15:50.544479 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 7bcfdc7255dd 23 hours ago 994MB
2026-01-13 01:15:50.544483 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 c916d4869f2e 23 hours ago 989MB
2026-01-13 01:15:50.544486 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 2de34a8fe642 23 hours ago 990MB
2026-01-13 01:15:50.544489 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 bdd612831e36 23 hours ago 990MB
2026-01-13 01:15:50.544497 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 2e6c856929d9 23 hours ago 990MB
2026-01-13 01:15:50.544500 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 89d7df329e57 23 hours ago 994MB
2026-01-13 01:15:50.544503 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 dbc6b3f69c5d 23 hours ago 997MB
2026-01-13 01:15:50.544506 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9a0da1090b9d 23 hours ago 996MB
2026-01-13 01:15:50.544509 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 e83ecb5b3f20 23 hours ago 997MB
2026-01-13 01:15:50.544512 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 b64cd2c2f9fc 23 hours ago 1.1GB
2026-01-13 01:15:50.544515 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 6b3dfa3e1db7 23 hours ago 846MB
2026-01-13 01:15:50.544518 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 954866669b8c 23 hours ago 846MB
2026-01-13 01:15:50.544521 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 8724a678703f 23 hours ago 846MB
2026-01-13 01:15:50.544524 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 2e1bcc6f62cb 23 hours ago 846MB
2026-01-13 01:15:50.932372 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-01-13 01:15:50.946724 | orchestrator | + set -e
2026-01-13 01:15:50.946784 | orchestrator | + source /opt/manager-vars.sh
2026-01-13 01:15:50.947682 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-13 01:15:50.947715 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-13 01:15:50.947721 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-13 01:15:50.947726 | orchestrator | ++ CEPH_VERSION=reef
2026-01-13 01:15:50.947731 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-13 01:15:50.947737 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-13 01:15:50.947742 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-13 01:15:50.947747 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-13 01:15:50.947752 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-13 01:15:50.947757 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-13 01:15:50.947774 | orchestrator | ++ export ARA=false
2026-01-13 01:15:50.947780 | orchestrator | ++ ARA=false
2026-01-13 01:15:50.947785 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-13 01:15:50.947790 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-13 01:15:50.947794 | orchestrator | ++ export TEMPEST=true
2026-01-13 01:15:50.947799 | orchestrator | ++ TEMPEST=true
2026-01-13 01:15:50.947804 | orchestrator | ++ export IS_ZUUL=true
2026-01-13 01:15:50.947809 | orchestrator | ++ IS_ZUUL=true
2026-01-13 01:15:50.947814 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.234
2026-01-13 01:15:50.947819 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.234
2026-01-13 01:15:50.947824 | orchestrator | ++ export EXTERNAL_API=false
2026-01-13 01:15:50.947829 | orchestrator | ++ EXTERNAL_API=false
2026-01-13 01:15:50.947834 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-13 01:15:50.947839 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-13 01:15:50.947844 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-13 01:15:50.947849 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-13 01:15:50.947881 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-13 01:15:50.947887 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-13 01:15:50.947892 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-13 01:15:50.947897 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-01-13 01:15:50.958743 | orchestrator | + set -e
2026-01-13 01:15:50.958793 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-13 01:15:50.958798 | orchestrator | ++ export INTERACTIVE=false
2026-01-13 01:15:50.958802 | orchestrator | ++ INTERACTIVE=false
2026-01-13 01:15:50.958806 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-13 01:15:50.958809 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-13 01:15:50.958812 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-01-13 01:15:50.960044 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-01-13 01:15:50.966573 | orchestrator |
2026-01-13 01:15:50.966623 | orchestrator | # Ceph status
2026-01-13 01:15:50.966627 | orchestrator |
2026-01-13 01:15:50.966631 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-13 01:15:50.966635 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-13 01:15:50.966639 | orchestrator | + echo
2026-01-13 01:15:50.966653 | orchestrator | + echo '# Ceph status'
2026-01-13 01:15:50.966657 | orchestrator | + echo
2026-01-13 01:15:50.966660 | orchestrator | + ceph -s
2026-01-13 01:15:51.573088 | orchestrator |   cluster:
2026-01-13 01:15:51.573143 | orchestrator |     id: 11111111-1111-1111-1111-111111111111
2026-01-13 01:15:51.573152 | orchestrator |     health: HEALTH_OK
2026-01-13 01:15:51.573158 | orchestrator |
2026-01-13 01:15:51.573163 | orchestrator |   services:
2026-01-13 01:15:51.573169 | orchestrator |     mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m)
2026-01-13 01:15:51.573175 | orchestrator |     mgr: testbed-node-2(active, since 14m), standbys: testbed-node-1, testbed-node-0
2026-01-13 01:15:51.573180 | orchestrator |     mds: 1/1 daemons up, 2 standby
2026-01-13 01:15:51.573185 | orchestrator |     osd: 6 osds: 6 up (since 23m), 6 in (since 24m)
2026-01-13 01:15:51.573190 | orchestrator |     rgw: 3 daemons active (3 hosts, 1 zones)
2026-01-13 01:15:51.573195 | orchestrator |
2026-01-13 01:15:51.573201 | orchestrator |   data:
2026-01-13 01:15:51.573206 | orchestrator |     volumes: 1/1 healthy
2026-01-13 01:15:51.573279 | orchestrator |     pools: 14 pools, 401 pgs
2026-01-13 01:15:51.573285 | orchestrator |     objects: 552 objects, 2.2 GiB
2026-01-13 01:15:51.573290 | orchestrator |     usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-01-13 01:15:51.573295 | orchestrator |     pgs: 401 active+clean
2026-01-13 01:15:51.573300 | orchestrator |
2026-01-13 01:15:51.628601 | orchestrator |
2026-01-13 01:15:51.628648 | orchestrator | # Ceph versions
2026-01-13 01:15:51.628655 | orchestrator |
2026-01-13 01:15:51.628661 | orchestrator | + echo
2026-01-13 01:15:51.628666 | orchestrator | + echo '# Ceph versions'
2026-01-13 01:15:51.628673 | orchestrator | + echo
2026-01-13 01:15:51.628678 | orchestrator | + ceph versions
2026-01-13 01:15:52.211953 | orchestrator | {
2026-01-13 01:15:52.212018 | orchestrator |     "mon": {
2026-01-13 01:15:52.212027 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-13 01:15:52.212032 | orchestrator |     },
2026-01-13 01:15:52.212038 | orchestrator |     "mgr": {
2026-01-13 01:15:52.212043 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-13 01:15:52.212048 | orchestrator |     },
2026-01-13 01:15:52.212053 | orchestrator |     "osd": {
2026-01-13 01:15:52.212058 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-01-13 01:15:52.212062 | orchestrator |     },
2026-01-13 01:15:52.212067 | orchestrator |     "mds": {
2026-01-13 01:15:52.212073 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-13 01:15:52.212078 | orchestrator |     },
2026-01-13 01:15:52.212083 | orchestrator |     "rgw": {
2026-01-13 01:15:52.212102 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-13 01:15:52.212107 | orchestrator |     },
2026-01-13 01:15:52.212112 | orchestrator |     "overall": {
2026-01-13 01:15:52.212117 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-01-13 01:15:52.212122 | orchestrator |     }
2026-01-13 01:15:52.212128 | orchestrator | }
2026-01-13 01:15:52.260173 | orchestrator |
2026-01-13 01:15:52.260254 | orchestrator | # Ceph OSD tree
2026-01-13 01:15:52.260263 | orchestrator |
2026-01-13 01:15:52.260269 | orchestrator | + echo
2026-01-13 01:15:52.260274 | orchestrator | + echo '# Ceph OSD tree'
2026-01-13 01:15:52.260280 | orchestrator | + echo
2026-01-13 01:15:52.260285 | orchestrator | + ceph osd df tree
2026-01-13 01:15:52.769475 | orchestrator | ID CLASS WEIGHT  REWEIGHT SIZE   RAW USE DATA    OMAP  META    AVAIL   %USE VAR  PGS STATUS TYPE NAME
2026-01-13 01:15:52.769553 | orchestrator | -1       0.11691        - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00   -        root default
2026-01-13 01:15:52.769559 | orchestrator | -3       0.03897        -  40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB  38 GiB 5.92 1.00   -        host testbed-node-3
2026-01-13 01:15:52.769564 | orchestrator |  0   hdd 0.01949  1.00000  20 GiB 1.5 GiB 1.5 GiB 1 KiB  70 MiB  18 GiB 7.70 1.30 200     up     osd.0
2026-01-13 01:15:52.769570 | orchestrator |  4   hdd 0.01949  1.00000  20 GiB 844 MiB 771 MiB 1 KiB  74 MiB  19 GiB 4.13 0.70 190     up     osd.4
2026-01-13 01:15:52.769574 | orchestrator | -5       0.03897        -  40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB  38 GiB 5.92 1.00   -        host testbed-node-4
2026-01-13 01:15:52.769579 | orchestrator |  2   hdd 0.01949  1.00000  20 GiB 1.5 GiB 1.4 GiB 1 KiB  74 MiB  19 GiB 7.38 1.25 206     up     osd.2
2026-01-13 01:15:52.769612 | orchestrator |  5   hdd 0.01949  1.00000  20 GiB 912 MiB 843 MiB 1 KiB  70 MiB  19 GiB 4.46 0.75 186     up     osd.5
2026-01-13 01:15:52.769618 | orchestrator | -7       0.03897        -  40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB  38 GiB 5.92 1.00   -        host testbed-node-5
2026-01-13 01:15:52.769622 | orchestrator |  1   hdd 0.01949  1.00000  20 GiB 1.4 GiB 1.3 GiB 1 KiB  74 MiB  19 GiB 7.10 1.20 184     up     osd.1
2026-01-13 01:15:52.769627 | orchestrator |  3   hdd 0.01949  1.00000  20 GiB 969 MiB 899 MiB 1 KiB  70 MiB  19 GiB 4.74 0.80 204     up     osd.3
2026-01-13 01:15:52.769632 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2026-01-13 01:15:52.769636 | orchestrator | MIN/MAX VAR: 0.70/1.30 STDDEV: 1.50
2026-01-13 01:15:52.817610 | orchestrator |
2026-01-13 01:15:52.817670 | orchestrator | # Ceph monitor status
2026-01-13 01:15:52.817679 | orchestrator |
2026-01-13 01:15:52.817684 | orchestrator | + echo
2026-01-13 01:15:52.817690 | orchestrator | + echo '# Ceph monitor status'
2026-01-13 01:15:52.817695 | orchestrator | + echo
2026-01-13 01:15:52.817700 | orchestrator | + ceph mon stat
2026-01-13 01:15:53.359810 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-01-13 01:15:53.403658 | orchestrator |
2026-01-13 01:15:53.403716 | orchestrator | # Ceph quorum status
2026-01-13 01:15:53.403721 | orchestrator |
2026-01-13 01:15:53.403725 | orchestrator | + echo
2026-01-13 01:15:53.403728 | orchestrator | + echo '# Ceph quorum status'
2026-01-13 01:15:53.403732 | orchestrator | + echo
2026-01-13 01:15:53.404332 | orchestrator | + ceph quorum_status
2026-01-13 01:15:53.404370 | orchestrator | + jq
2026-01-13 01:15:54.016983 | orchestrator | {
2026-01-13 01:15:54.017157 | orchestrator |   "election_epoch": 8,
2026-01-13 01:15:54.017177 | orchestrator |   "quorum": [
2026-01-13 01:15:54.017184 | orchestrator |     0,
2026-01-13 01:15:54.017190 | orchestrator |     1,
2026-01-13 01:15:54.017196 | orchestrator |     2
2026-01-13 01:15:54.017202 | orchestrator |   ],
2026-01-13 01:15:54.017303 | orchestrator |   "quorum_names": [
2026-01-13 01:15:54.017310 | orchestrator |     "testbed-node-0",
2026-01-13 01:15:54.017316 | orchestrator |     "testbed-node-1",
2026-01-13 01:15:54.017324 | orchestrator |     "testbed-node-2"
2026-01-13 01:15:54.017329 | orchestrator |   ],
2026-01-13 01:15:54.017336 | orchestrator |   "quorum_leader_name": "testbed-node-0",
2026-01-13 01:15:54.017343 | orchestrator |   "quorum_age": 1656,
2026-01-13 01:15:54.017349 | orchestrator |   "features": {
2026-01-13 01:15:54.017356 | orchestrator |     "quorum_con": "4540138322906710015",
2026-01-13 01:15:54.017362 | orchestrator |     "quorum_mon": [
2026-01-13 01:15:54.017369 | orchestrator |       "kraken",
2026-01-13 01:15:54.017375 | orchestrator |       "luminous",
2026-01-13 01:15:54.017381 | orchestrator |       "mimic",
2026-01-13 01:15:54.017387 | orchestrator |       "osdmap-prune",
2026-01-13 01:15:54.017393 | orchestrator |       "nautilus",
2026-01-13 01:15:54.017399 | orchestrator |       "octopus",
2026-01-13 01:15:54.017405 | orchestrator |       "pacific",
2026-01-13 01:15:54.017411 | orchestrator |       "elector-pinging",
2026-01-13 01:15:54.017417 | orchestrator |       "quincy",
2026-01-13 01:15:54.017423 | orchestrator |       "reef"
2026-01-13 01:15:54.017430 | orchestrator |     ]
2026-01-13 01:15:54.017436 | orchestrator |   },
2026-01-13 01:15:54.017442 | orchestrator |   "monmap": {
2026-01-13 01:15:54.017449 | orchestrator |     "epoch": 1,
2026-01-13 01:15:54.017455 | orchestrator |     "fsid": "11111111-1111-1111-1111-111111111111",
2026-01-13 01:15:54.017462 | orchestrator |     "modified": "2026-01-13T00:47:57.735690Z",
2026-01-13 01:15:54.017468 | orchestrator |     "created": "2026-01-13T00:47:57.735690Z",
2026-01-13 01:15:54.017474 | orchestrator |     "min_mon_release": 18,
2026-01-13 01:15:54.017480 | orchestrator |     "min_mon_release_name": "reef",
2026-01-13 01:15:54.017486 | orchestrator |     "election_strategy": 1,
2026-01-13 01:15:54.017492 | orchestrator |     "disallowed_leaders: ": "",
2026-01-13 01:15:54.017498 | orchestrator |     "stretch_mode": false,
2026-01-13 01:15:54.017504 | orchestrator |     "tiebreaker_mon": "",
2026-01-13 01:15:54.017510 | orchestrator |     "removed_ranks: ": "",
2026-01-13 01:15:54.017516 | orchestrator |     "features": {
2026-01-13 01:15:54.017522 | orchestrator |       "persistent": [
2026-01-13 01:15:54.017547 | orchestrator |         "kraken",
2026-01-13 01:15:54.017553 | orchestrator |         "luminous",
2026-01-13 01:15:54.017558 | orchestrator |         "mimic",
2026-01-13 01:15:54.017563 | orchestrator |         "osdmap-prune",
2026-01-13 01:15:54.017569 | orchestrator |         "nautilus",
2026-01-13 01:15:54.017574 | orchestrator |         "octopus",
2026-01-13 01:15:54.017579 | orchestrator |         "pacific",
2026-01-13 01:15:54.017584 | orchestrator |         "elector-pinging",
2026-01-13 01:15:54.017589 | orchestrator |         "quincy",
2026-01-13 01:15:54.017594 | orchestrator |         "reef"
2026-01-13 01:15:54.017600 | orchestrator |       ],
2026-01-13 01:15:54.017606 | orchestrator |       "optional": []
2026-01-13 01:15:54.017611 | orchestrator |     },
2026-01-13 01:15:54.017616 | orchestrator |     "mons": [
2026-01-13 01:15:54.017622 | orchestrator |       {
2026-01-13 01:15:54.017627 | orchestrator |         "rank": 0,
2026-01-13 01:15:54.017642 | orchestrator |         "name": "testbed-node-0",
2026-01-13 01:15:54.017648 | orchestrator |         "public_addrs": {
2026-01-13 01:15:54.017653 | orchestrator |           "addrvec": [
2026-01-13 01:15:54.017658 | orchestrator |             {
2026-01-13 01:15:54.017664 | orchestrator |               "type": "v2",
2026-01-13 01:15:54.017669 | orchestrator |               "addr": "192.168.16.10:3300",
2026-01-13 01:15:54.017675 | orchestrator |               "nonce": 0
2026-01-13 01:15:54.017680 | orchestrator |             },
2026-01-13 01:15:54.017685 | orchestrator |             {
2026-01-13 01:15:54.017691 | orchestrator |               "type": "v1",
2026-01-13 01:15:54.017696 | orchestrator |               "addr": "192.168.16.10:6789",
2026-01-13 01:15:54.017702 | orchestrator |               "nonce": 0
2026-01-13 01:15:54.017707 | orchestrator |             }
2026-01-13 01:15:54.017713 | orchestrator |           ]
2026-01-13 01:15:54.017718 | orchestrator |         },
2026-01-13 01:15:54.017723 | orchestrator |         "addr": "192.168.16.10:6789/0",
2026-01-13 01:15:54.017728 | orchestrator |         "public_addr": "192.168.16.10:6789/0",
2026-01-13 01:15:54.017733 | orchestrator |         "priority": 0,
2026-01-13 01:15:54.017739 | orchestrator |         "weight": 0,
2026-01-13 01:15:54.017744 | orchestrator |         "crush_location": "{}"
2026-01-13 01:15:54.017748 | orchestrator |       },
2026-01-13 01:15:54.017753 | orchestrator |       {
2026-01-13 01:15:54.017759 | orchestrator |         "rank": 1,
2026-01-13 01:15:54.017764 | orchestrator |         "name": "testbed-node-1",
2026-01-13 01:15:54.017768 | orchestrator |         "public_addrs": {
2026-01-13 01:15:54.017773 | orchestrator |           "addrvec": [
2026-01-13 01:15:54.017779 | orchestrator |             {
2026-01-13 01:15:54.017784 | orchestrator |               "type": "v2",
2026-01-13 01:15:54.017790 | orchestrator |               "addr": "192.168.16.11:3300",
2026-01-13 01:15:54.017795 | orchestrator |               "nonce": 0
2026-01-13 01:15:54.017800 | orchestrator |             },
2026-01-13 01:15:54.017805 | orchestrator |             {
2026-01-13 01:15:54.017809 | orchestrator |               "type": "v1",
2026-01-13 01:15:54.017815 | orchestrator |               "addr": "192.168.16.11:6789",
2026-01-13 01:15:54.017820 | orchestrator |               "nonce": 0
2026-01-13 01:15:54.017826 | orchestrator |             }
2026-01-13 01:15:54.017832 | orchestrator |           ]
2026-01-13 01:15:54.017837 | orchestrator |         },
2026-01-13 01:15:54.017843 | orchestrator |         "addr": "192.168.16.11:6789/0",
2026-01-13 01:15:54.017849 | orchestrator |         "public_addr": "192.168.16.11:6789/0",
2026-01-13 01:15:54.017855 | orchestrator |         "priority": 0,
2026-01-13 01:15:54.017861 | orchestrator |         "weight": 0,
2026-01-13 01:15:54.017866 | orchestrator |         "crush_location": "{}"
2026-01-13 01:15:54.017871 | orchestrator |       },
2026-01-13 01:15:54.017876 | orchestrator |       {
2026-01-13 01:15:54.017881 | orchestrator |         "rank": 2,
2026-01-13 01:15:54.017887 | orchestrator |         "name": "testbed-node-2",
2026-01-13 01:15:54.017892 | orchestrator |         "public_addrs": {
2026-01-13 01:15:54.017897 | orchestrator |           "addrvec": [
2026-01-13 01:15:54.017902 | orchestrator |             {
2026-01-13 01:15:54.017908 | orchestrator |               "type": "v2",
2026-01-13 01:15:54.017913 | orchestrator |               "addr": "192.168.16.12:3300",
2026-01-13 01:15:54.017918 | orchestrator |               "nonce": 0
2026-01-13 01:15:54.017924 | orchestrator |             },
2026-01-13 01:15:54.017929 | orchestrator |             {
2026-01-13 01:15:54.017935 | orchestrator |               "type": "v1",
2026-01-13 01:15:54.017943 | orchestrator |               "addr": "192.168.16.12:6789",
2026-01-13 01:15:54.017949 | orchestrator |               "nonce": 0
2026-01-13 01:15:54.017954 | orchestrator |             }
2026-01-13 01:15:54.017960 | orchestrator |           ]
2026-01-13 01:15:54.017965 | orchestrator |         },
2026-01-13 01:15:54.017970 | orchestrator |         "addr": "192.168.16.12:6789/0",
2026-01-13 01:15:54.017975 | orchestrator |         "public_addr": "192.168.16.12:6789/0",
2026-01-13 01:15:54.017989 | orchestrator |         "priority": 0,
2026-01-13 01:15:54.017995 | orchestrator |         "weight": 0,
2026-01-13 01:15:54.018001 | orchestrator |         "crush_location": "{}"
2026-01-13 01:15:54.018006 | orchestrator |       }
2026-01-13 01:15:54.018059 | orchestrator |     ]
2026-01-13 01:15:54.018068 | orchestrator |   }
2026-01-13 01:15:54.018074 | orchestrator | }
2026-01-13 01:15:54.018080 | orchestrator |
2026-01-13 01:15:54.018086 | orchestrator | # Ceph free space status
2026-01-13 01:15:54.018092 | orchestrator |
2026-01-13 01:15:54.018098 | orchestrator | + echo
2026-01-13 01:15:54.018104 | orchestrator | + echo '# Ceph free space status'
2026-01-13 01:15:54.018110 | orchestrator | + echo
2026-01-13 01:15:54.018117 | orchestrator | + ceph df
2026-01-13 01:15:54.571068 | orchestrator | --- RAW STORAGE ---
2026-01-13 01:15:54.571130 | orchestrator | CLASS SIZE    AVAIL   USED    RAW USED %RAW USED
2026-01-13 01:15:54.571145 | orchestrator | hdd   120 GiB 113 GiB 7.1 GiB 7.1 GiB  5.92
2026-01-13 01:15:54.571151 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB  5.92
2026-01-13 01:15:54.571156 | orchestrator |
2026-01-13 01:15:54.571162 | orchestrator | --- POOLS ---
2026-01-13 01:15:54.571167 | orchestrator | POOL                      ID PGS STORED  OBJECTS USED    %USED MAX AVAIL
2026-01-13 01:15:54.571173 | orchestrator | .mgr                       1   1 577 KiB       2 1.1 MiB     0    52 GiB
2026-01-13 01:15:54.571179 | orchestrator | cephfs_data                2  32     0 B       0     0 B     0    35 GiB
2026-01-13 01:15:54.571183 | orchestrator | cephfs_metadata            3  16 4.4 KiB      22  96 KiB     0    35 GiB
2026-01-13 01:15:54.571188 | orchestrator | default.rgw.buckets.data   4  32     0 B       0     0 B     0    35 GiB
2026-01-13 01:15:54.571193 | orchestrator | default.rgw.buckets.index  5  32     0 B       0     0 B     0    35 GiB
2026-01-13 01:15:54.571198 | orchestrator | default.rgw.control        6  32     0 B       8     0 B     0    35 GiB
2026-01-13 01:15:54.571216 | orchestrator | default.rgw.log            7  32 3.6 KiB     209 408 KiB     0    35 GiB
2026-01-13 01:15:54.571222 | orchestrator | default.rgw.meta           8  32     0 B       0     0 B     0    35 GiB
2026-01-13 01:15:54.571227 | orchestrator | .rgw.root                  9  32 1.4 KiB       4  32 KiB     0    52 GiB
2026-01-13 01:15:54.571233 | orchestrator | backups                   10  32    19 B       2  12 KiB     0    35 GiB
2026-01-13 01:15:54.571248 | orchestrator | volumes                   11  32    19 B       2  12 KiB     0    35 GiB
2026-01-13 01:15:54.571253 | orchestrator | images                    12  32 2.2 GiB     299 6.7 GiB  5.98    35 GiB
2026-01-13 01:15:54.571258 | orchestrator | metrics                   13  32    19 B       2  12 KiB     0    35 GiB
2026-01-13 01:15:54.571264 | orchestrator | vms                       14  32    19 B       2  12 KiB     0    35 GiB
2026-01-13 01:15:54.620144 | orchestrator | ++ semver latest 5.0.0
2026-01-13 01:15:54.671642 | orchestrator | + [[ -1 -eq -1 ]]
2026-01-13 01:15:54.671698 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-01-13 01:15:54.671705 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2026-01-13 01:15:54.671710 | orchestrator | + osism apply facts
2026-01-13 01:16:06.800541 | orchestrator | 2026-01-13 01:16:06 | INFO  | Task a66d0131-6367-4bd5-a2fd-6d1bec74496b (facts) was prepared for execution.
2026-01-13 01:16:06.800604 | orchestrator | 2026-01-13 01:16:06 | INFO  | It takes a moment until task a66d0131-6367-4bd5-a2fd-6d1bec74496b (facts) has been started and output is visible here.
2026-01-13 01:16:22.524546 | orchestrator |
2026-01-13 01:16:22.524602 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-13 01:16:22.524608 | orchestrator |
2026-01-13 01:16:22.524613 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-13 01:16:22.524617 | orchestrator | Tuesday 13 January 2026  01:16:11 +0000 (0:00:00.266)       0:00:00.266 *******
2026-01-13 01:16:22.524621 | orchestrator | ok: [testbed-manager]
2026-01-13 01:16:22.524625 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:16:22.524629 | orchestrator | ok: [testbed-node-1]
2026-01-13 01:16:22.524633 | orchestrator | ok: [testbed-node-2]
2026-01-13 01:16:22.524637 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:16:22.524640 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:16:22.524644 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:16:22.524676 | orchestrator |
2026-01-13 01:16:22.524680 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-13 01:16:22.524684 | orchestrator | Tuesday 13 January 2026  01:16:12 +0000 (0:00:01.377)       0:00:01.644 *******
2026-01-13 01:16:22.524688 | orchestrator | skipping: [testbed-manager]
2026-01-13 01:16:22.524693 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:16:22.524696 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:16:22.524700 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:16:22.524704 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:16:22.524707 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:16:22.524711 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:16:22.524715 | orchestrator |
2026-01-13 01:16:22.524719 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-13 01:16:22.524722 | orchestrator |
2026-01-13 01:16:22.524726 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-13 01:16:22.524730 | orchestrator | Tuesday 13 January 2026  01:16:13 +0000 (0:00:01.415)       0:00:03.059 *******
2026-01-13 01:16:22.524734 | orchestrator | ok: [testbed-node-2]
2026-01-13 01:16:22.524737 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:16:22.524741 | orchestrator | ok: [testbed-node-1]
2026-01-13 01:16:22.524745 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:16:22.524756 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:16:22.524760 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:16:22.524764 | orchestrator | ok: [testbed-manager]
2026-01-13 01:16:22.524768 | orchestrator |
2026-01-13 01:16:22.524772 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-13 01:16:22.524776 | orchestrator |
2026-01-13 01:16:22.524779 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-13 01:16:22.524783 | orchestrator | Tuesday 13 January 2026  01:16:21 +0000 (0:00:07.560)       0:00:10.620 *******
2026-01-13 01:16:22.524787 | orchestrator | skipping: [testbed-manager]
2026-01-13 01:16:22.524791 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:16:22.524794 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:16:22.524798 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:16:22.524802 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:16:22.524805 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:16:22.524809 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:16:22.524813 | orchestrator |
2026-01-13 01:16:22.524817 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 01:16:22.524821 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-13 01:16:22.524825 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-13 01:16:22.524829 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-13 01:16:22.524833 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-13 01:16:22.524836 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-13 01:16:22.524840 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-13 01:16:22.524844 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-13 01:16:22.524847 | orchestrator |
2026-01-13 01:16:22.524851 | orchestrator |
2026-01-13 01:16:22.524855 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 01:16:22.524859 | orchestrator | Tuesday 13 January 2026  01:16:22 +0000 (0:00:00.545)       0:00:11.166 *******
2026-01-13 01:16:22.524863 | orchestrator | ===============================================================================
2026-01-13 01:16:22.524869 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.56s
2026-01-13 01:16:22.524873 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.42s
2026-01-13 01:16:22.524877 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.38s
2026-01-13 01:16:22.524880 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s
2026-01-13 01:16:22.909498 | orchestrator | + osism validate ceph-mons
2026-01-13 01:16:55.066285 | orchestrator |
2026-01-13 01:16:55.066350 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-01-13 01:16:55.066357 | orchestrator |
2026-01-13 01:16:55.066361 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-01-13 01:16:55.066365 | orchestrator | Tuesday 13 January 2026  01:16:39 +0000 (0:00:00.421)       0:00:00.421 *******
2026-01-13 01:16:55.066370 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-13 01:16:55.066373 | orchestrator |
2026-01-13 01:16:55.066377 | orchestrator | TASK [Create report output directory] ******************************************
2026-01-13 01:16:55.066381 | orchestrator | Tuesday 13 January 2026  01:16:40 +0000 (0:00:00.777)       0:00:01.198 *******
2026-01-13 01:16:55.066385 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-13 01:16:55.066389 | orchestrator |
2026-01-13 01:16:55.066393 | orchestrator | TASK [Define report vars] ******************************************************
2026-01-13 01:16:55.066396 | orchestrator | Tuesday 13 January 2026  01:16:41 +0000 (0:00:00.129)       0:00:02.131 *******
2026-01-13 01:16:55.066400 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:16:55.066405 | orchestrator |
2026-01-13 01:16:55.066408 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-01-13 01:16:55.066412 | orchestrator | Tuesday 13 January 2026  01:16:41 +0000 (0:00:00.279)       0:00:02.261 *******
2026-01-13 01:16:55.066416 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:16:55.066420 | orchestrator | ok: [testbed-node-1]
2026-01-13 01:16:55.066424 | orchestrator | ok: [testbed-node-2]
2026-01-13 01:16:55.066427 | orchestrator |
2026-01-13 01:16:55.066439 | orchestrator | TASK [Get container info] ******************************************************
2026-01-13 01:16:55.066444 | orchestrator | Tuesday 13 January 2026  01:16:41 +0000 (0:00:00.279)       0:00:02.540 *******
2026-01-13 01:16:55.066447 | orchestrator | ok: [testbed-node-2]
2026-01-13 01:16:55.066451 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:16:55.066455 | orchestrator | ok: [testbed-node-1]
2026-01-13 01:16:55.066458 | orchestrator |
2026-01-13 01:16:55.066462 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-01-13 01:16:55.066466 | orchestrator | Tuesday 13 January 2026  01:16:42 +0000 (0:00:00.975)       0:00:03.515 *******
2026-01-13 01:16:55.066470 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:16:55.066473 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:16:55.066477 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:16:55.066481 | orchestrator |
2026-01-13 01:16:55.066485 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-01-13 01:16:55.066489 | orchestrator | Tuesday 13 January 2026  01:16:43 +0000 (0:00:00.288)       0:00:03.804 *******
2026-01-13 01:16:55.066492 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:16:55.066496 | orchestrator | ok: [testbed-node-1]
2026-01-13 01:16:55.066502 | orchestrator | ok: [testbed-node-2]
2026-01-13 01:16:55.066519 | orchestrator |
2026-01-13 01:16:55.066527 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-13 01:16:55.066532 | orchestrator | Tuesday 13 January 2026  01:16:43 +0000 (0:00:00.485)       0:00:04.289 *******
2026-01-13 01:16:55.066539 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:16:55.066544 | orchestrator | ok: [testbed-node-1]
2026-01-13 01:16:55.066551 | orchestrator | ok: [testbed-node-2]
2026-01-13 01:16:55.066557 | orchestrator |
2026-01-13 01:16:55.066563 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-01-13 01:16:55.066569 | orchestrator | Tuesday 13 January 2026  01:16:44 +0000 (0:00:00.315)       0:00:04.605 *******
2026-01-13 01:16:55.066589 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:16:55.066597 | orchestrator | skipping: [testbed-node-1]
2026-01-13 01:16:55.066603 | orchestrator | skipping: [testbed-node-2]
2026-01-13 01:16:55.066607 | orchestrator |
2026-01-13 01:16:55.066610 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-01-13 01:16:55.066614 | orchestrator | Tuesday 13 January 2026  01:16:44 +0000 (0:00:00.286)       0:00:04.891 *******
2026-01-13 01:16:55.066618 | orchestrator | ok: [testbed-node-0]
2026-01-13 01:16:55.066622 | orchestrator | ok: [testbed-node-1]
2026-01-13 01:16:55.066625 | orchestrator | ok: [testbed-node-2]
2026-01-13 01:16:55.066629 | orchestrator |
2026-01-13 01:16:55.066633 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-13 01:16:55.066636 | orchestrator | Tuesday 13 January 2026  01:16:44 +0000 (0:00:00.511)       0:00:05.403 *******
2026-01-13 01:16:55.066640 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:16:55.066644 | orchestrator |
2026-01-13 01:16:55.066648 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-13 01:16:55.066651 | orchestrator | Tuesday 13 January 2026  01:16:45 +0000 (0:00:00.263)       0:00:05.667 *******
2026-01-13 01:16:55.066655 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:16:55.066659 | orchestrator |
2026-01-13 01:16:55.066663 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-13 01:16:55.066667 | orchestrator | Tuesday 13 January 2026  01:16:45 +0000 (0:00:00.245)       0:00:05.912 *******
2026-01-13 01:16:55.066671 | orchestrator | skipping: [testbed-node-0]
2026-01-13 01:16:55.066674 | orchestrator |
2026-01-13 01:16:55.066678 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-13 01:16:55.066682 | orchestrator | Tuesday 13 January 2026  01:16:45 +0000 (0:00:00.234)       0:00:06.147 *******
2026-01-13 01:16:55.066686 | orchestrator |
2026-01-13 01:16:55.066689 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-13 01:16:55.066693 | orchestrator |
Tuesday 13 January 2026 01:16:45 +0000 (0:00:00.068) 0:00:06.215 ******* 2026-01-13 01:16:55.066697 | orchestrator | 2026-01-13 01:16:55.066700 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-13 01:16:55.066704 | orchestrator | Tuesday 13 January 2026 01:16:45 +0000 (0:00:00.068) 0:00:06.284 ******* 2026-01-13 01:16:55.066708 | orchestrator | 2026-01-13 01:16:55.066712 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-13 01:16:55.066716 | orchestrator | Tuesday 13 January 2026 01:16:45 +0000 (0:00:00.074) 0:00:06.359 ******* 2026-01-13 01:16:55.066720 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:16:55.066723 | orchestrator | 2026-01-13 01:16:55.066727 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-01-13 01:16:55.066731 | orchestrator | Tuesday 13 January 2026 01:16:46 +0000 (0:00:00.232) 0:00:06.592 ******* 2026-01-13 01:16:55.066735 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:16:55.066738 | orchestrator | 2026-01-13 01:16:55.066752 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-01-13 01:16:55.066756 | orchestrator | Tuesday 13 January 2026 01:16:46 +0000 (0:00:00.257) 0:00:06.850 ******* 2026-01-13 01:16:55.066760 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:16:55.066764 | orchestrator | 2026-01-13 01:16:55.066768 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-01-13 01:16:55.066771 | orchestrator | Tuesday 13 January 2026 01:16:46 +0000 (0:00:00.133) 0:00:06.983 ******* 2026-01-13 01:16:55.066775 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:16:55.066779 | orchestrator | 2026-01-13 01:16:55.066783 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-01-13 01:16:55.066786 | orchestrator | 
Tuesday 13 January 2026 01:16:48 +0000 (0:00:01.696) 0:00:08.680 ******* 2026-01-13 01:16:55.066790 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:16:55.066794 | orchestrator | 2026-01-13 01:16:55.066798 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-01-13 01:16:55.066834 | orchestrator | Tuesday 13 January 2026 01:16:48 +0000 (0:00:00.519) 0:00:09.200 ******* 2026-01-13 01:16:55.066838 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:16:55.066842 | orchestrator | 2026-01-13 01:16:55.066845 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-01-13 01:16:55.066849 | orchestrator | Tuesday 13 January 2026 01:16:48 +0000 (0:00:00.133) 0:00:09.333 ******* 2026-01-13 01:16:55.066853 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:16:55.066857 | orchestrator | 2026-01-13 01:16:55.066861 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-01-13 01:16:55.066867 | orchestrator | Tuesday 13 January 2026 01:16:49 +0000 (0:00:00.324) 0:00:09.658 ******* 2026-01-13 01:16:55.066875 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:16:55.066885 | orchestrator | 2026-01-13 01:16:55.066891 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-01-13 01:16:55.066897 | orchestrator | Tuesday 13 January 2026 01:16:49 +0000 (0:00:00.300) 0:00:09.958 ******* 2026-01-13 01:16:55.066903 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:16:55.066909 | orchestrator | 2026-01-13 01:16:55.066916 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-01-13 01:16:55.066921 | orchestrator | Tuesday 13 January 2026 01:16:49 +0000 (0:00:00.108) 0:00:10.067 ******* 2026-01-13 01:16:55.066927 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:16:55.066934 | orchestrator | 2026-01-13 01:16:55.066940 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2026-01-13 01:16:55.066947 | orchestrator | Tuesday 13 January 2026 01:16:49 +0000 (0:00:00.126) 0:00:10.193 ******* 2026-01-13 01:16:55.066953 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:16:55.066957 | orchestrator | 2026-01-13 01:16:55.066962 | orchestrator | TASK [Gather status data] ****************************************************** 2026-01-13 01:16:55.066967 | orchestrator | Tuesday 13 January 2026 01:16:49 +0000 (0:00:00.108) 0:00:10.301 ******* 2026-01-13 01:16:55.066971 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:16:55.066975 | orchestrator | 2026-01-13 01:16:55.066979 | orchestrator | TASK [Set health test data] **************************************************** 2026-01-13 01:16:55.066984 | orchestrator | Tuesday 13 January 2026 01:16:50 +0000 (0:00:01.243) 0:00:11.544 ******* 2026-01-13 01:16:55.066988 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:16:55.066992 | orchestrator | 2026-01-13 01:16:55.066997 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-01-13 01:16:55.067001 | orchestrator | Tuesday 13 January 2026 01:16:51 +0000 (0:00:00.298) 0:00:11.843 ******* 2026-01-13 01:16:55.067006 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:16:55.067010 | orchestrator | 2026-01-13 01:16:55.067015 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-01-13 01:16:55.067019 | orchestrator | Tuesday 13 January 2026 01:16:51 +0000 (0:00:00.153) 0:00:11.997 ******* 2026-01-13 01:16:55.067026 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:16:55.067034 | orchestrator | 2026-01-13 01:16:55.067044 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-01-13 01:16:55.067049 | orchestrator | Tuesday 13 January 2026 01:16:51 +0000 (0:00:00.145) 0:00:12.143 ******* 2026-01-13 01:16:55.067055 | 
orchestrator | skipping: [testbed-node-0] 2026-01-13 01:16:55.067060 | orchestrator | 2026-01-13 01:16:55.067066 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-01-13 01:16:55.067072 | orchestrator | Tuesday 13 January 2026 01:16:51 +0000 (0:00:00.326) 0:00:12.470 ******* 2026-01-13 01:16:55.067078 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:16:55.067083 | orchestrator | 2026-01-13 01:16:55.067089 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-13 01:16:55.067101 | orchestrator | Tuesday 13 January 2026 01:16:52 +0000 (0:00:00.149) 0:00:12.619 ******* 2026-01-13 01:16:55.067107 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-13 01:16:55.067113 | orchestrator | 2026-01-13 01:16:55.067119 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-13 01:16:55.067189 | orchestrator | Tuesday 13 January 2026 01:16:52 +0000 (0:00:00.239) 0:00:12.859 ******* 2026-01-13 01:16:55.067198 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:16:55.067205 | orchestrator | 2026-01-13 01:16:55.067213 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-13 01:16:55.067220 | orchestrator | Tuesday 13 January 2026 01:16:52 +0000 (0:00:00.234) 0:00:13.093 ******* 2026-01-13 01:16:55.067227 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-13 01:16:55.067232 | orchestrator | 2026-01-13 01:16:55.067236 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-13 01:16:55.067240 | orchestrator | Tuesday 13 January 2026 01:16:54 +0000 (0:00:01.750) 0:00:14.843 ******* 2026-01-13 01:16:55.067243 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-13 01:16:55.067247 | orchestrator | 2026-01-13 01:16:55.067251 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-01-13 01:16:55.067255 | orchestrator | Tuesday 13 January 2026 01:16:54 +0000 (0:00:00.268) 0:00:15.112 ******* 2026-01-13 01:16:55.067258 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-13 01:16:55.067262 | orchestrator | 2026-01-13 01:16:55.067272 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-13 01:16:57.678259 | orchestrator | Tuesday 13 January 2026 01:16:54 +0000 (0:00:00.258) 0:00:15.370 ******* 2026-01-13 01:16:57.678336 | orchestrator | 2026-01-13 01:16:57.678342 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-13 01:16:57.678348 | orchestrator | Tuesday 13 January 2026 01:16:54 +0000 (0:00:00.069) 0:00:15.439 ******* 2026-01-13 01:16:57.678352 | orchestrator | 2026-01-13 01:16:57.678356 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-13 01:16:57.678360 | orchestrator | Tuesday 13 January 2026 01:16:54 +0000 (0:00:00.084) 0:00:15.523 ******* 2026-01-13 01:16:57.678364 | orchestrator | 2026-01-13 01:16:57.678368 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-13 01:16:57.678372 | orchestrator | Tuesday 13 January 2026 01:16:55 +0000 (0:00:00.093) 0:00:15.617 ******* 2026-01-13 01:16:57.678377 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-13 01:16:57.678381 | orchestrator | 2026-01-13 01:16:57.678385 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-13 01:16:57.678389 | orchestrator | Tuesday 13 January 2026 01:16:56 +0000 (0:00:01.482) 0:00:17.099 ******* 2026-01-13 01:16:57.678392 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-01-13 01:16:57.678396 | orchestrator |  "msg": [ 
2026-01-13 01:16:57.678401 | orchestrator |  "Validator run completed.", 2026-01-13 01:16:57.678419 | orchestrator |  "You can find the report file here:", 2026-01-13 01:16:57.678423 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-01-13T01:16:40+00:00-report.json", 2026-01-13 01:16:57.678428 | orchestrator |  "on the following host:", 2026-01-13 01:16:57.678432 | orchestrator |  "testbed-manager" 2026-01-13 01:16:57.678435 | orchestrator |  ] 2026-01-13 01:16:57.678439 | orchestrator | } 2026-01-13 01:16:57.678443 | orchestrator | 2026-01-13 01:16:57.678447 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:16:57.678452 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-13 01:16:57.678457 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 01:16:57.678462 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 01:16:57.678465 | orchestrator | 2026-01-13 01:16:57.678469 | orchestrator | 2026-01-13 01:16:57.678473 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:16:57.678491 | orchestrator | Tuesday 13 January 2026 01:16:57 +0000 (0:00:00.827) 0:00:17.926 ******* 2026-01-13 01:16:57.678495 | orchestrator | =============================================================================== 2026-01-13 01:16:57.678499 | orchestrator | Aggregate test results step one ----------------------------------------- 1.75s 2026-01-13 01:16:57.678503 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.70s 2026-01-13 01:16:57.678507 | orchestrator | Write report file ------------------------------------------------------- 1.48s 2026-01-13 01:16:57.678511 | orchestrator | Gather status data 
------------------------------------------------------ 1.24s 2026-01-13 01:16:57.678515 | orchestrator | Get container info ------------------------------------------------------ 0.98s 2026-01-13 01:16:57.678518 | orchestrator | Create report output directory ------------------------------------------ 0.93s 2026-01-13 01:16:57.678522 | orchestrator | Print report file information ------------------------------------------- 0.83s 2026-01-13 01:16:57.678526 | orchestrator | Get timestamp for report file ------------------------------------------- 0.78s 2026-01-13 01:16:57.678530 | orchestrator | Set quorum test data ---------------------------------------------------- 0.52s 2026-01-13 01:16:57.678533 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.51s 2026-01-13 01:16:57.678537 | orchestrator | Set test result to passed if container is existing ---------------------- 0.49s 2026-01-13 01:16:57.678541 | orchestrator | Fail cluster-health if health is not acceptable (strict) ---------------- 0.33s 2026-01-13 01:16:57.678544 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s 2026-01-13 01:16:57.678548 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2026-01-13 01:16:57.678552 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.30s 2026-01-13 01:16:57.678556 | orchestrator | Set health test data ---------------------------------------------------- 0.30s 2026-01-13 01:16:57.678559 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-01-13 01:16:57.678563 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.29s 2026-01-13 01:16:57.678568 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2026-01-13 01:16:57.678574 | orchestrator | Aggregate test results step two 
----------------------------------------- 0.27s 2026-01-13 01:16:57.980311 | orchestrator | + osism validate ceph-mgrs 2026-01-13 01:17:23.607476 | orchestrator | 2026-01-13 01:17:23.607557 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-01-13 01:17:23.607569 | orchestrator | 2026-01-13 01:17:23.607578 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-01-13 01:17:23.607587 | orchestrator | Tuesday 13 January 2026 01:17:09 +0000 (0:00:00.432) 0:00:00.432 ******* 2026-01-13 01:17:23.607596 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-13 01:17:23.607603 | orchestrator | 2026-01-13 01:17:23.607611 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-13 01:17:23.607619 | orchestrator | Tuesday 13 January 2026 01:17:10 +0000 (0:00:00.792) 0:00:01.225 ******* 2026-01-13 01:17:23.607628 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-13 01:17:23.607636 | orchestrator | 2026-01-13 01:17:23.607643 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-13 01:17:23.607652 | orchestrator | Tuesday 13 January 2026 01:17:11 +0000 (0:00:01.036) 0:00:02.261 ******* 2026-01-13 01:17:23.607662 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:17:23.607681 | orchestrator | 2026-01-13 01:17:23.607693 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-01-13 01:17:23.607699 | orchestrator | Tuesday 13 January 2026 01:17:11 +0000 (0:00:00.130) 0:00:02.392 ******* 2026-01-13 01:17:23.607712 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:17:23.607717 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:17:23.607729 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:17:23.607734 | orchestrator | 2026-01-13 01:17:23.607768 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-01-13 01:17:23.607779 | orchestrator | Tuesday 13 January 2026 01:17:11 +0000 (0:00:00.286) 0:00:02.678 ******* 2026-01-13 01:17:23.607788 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:17:23.607796 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:17:23.607804 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:17:23.607812 | orchestrator | 2026-01-13 01:17:23.607820 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-01-13 01:17:23.607828 | orchestrator | Tuesday 13 January 2026 01:17:13 +0000 (0:00:01.251) 0:00:03.930 ******* 2026-01-13 01:17:23.607851 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:17:23.607860 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:17:23.607867 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:17:23.607874 | orchestrator | 2026-01-13 01:17:23.607882 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-01-13 01:17:23.607889 | orchestrator | Tuesday 13 January 2026 01:17:13 +0000 (0:00:00.273) 0:00:04.203 ******* 2026-01-13 01:17:23.607897 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:17:23.607905 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:17:23.607913 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:17:23.607921 | orchestrator | 2026-01-13 01:17:23.607929 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-13 01:17:23.607937 | orchestrator | Tuesday 13 January 2026 01:17:13 +0000 (0:00:00.471) 0:00:04.675 ******* 2026-01-13 01:17:23.607946 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:17:23.607954 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:17:23.607961 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:17:23.607969 | orchestrator | 2026-01-13 01:17:23.607978 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-01-13 01:17:23.607987 | orchestrator | Tuesday 13 January 2026 01:17:14 +0000 (0:00:00.295) 0:00:04.970 ******* 2026-01-13 01:17:23.607994 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:17:23.608002 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:17:23.608010 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:17:23.608017 | orchestrator | 2026-01-13 01:17:23.608022 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-01-13 01:17:23.608027 | orchestrator | Tuesday 13 January 2026 01:17:14 +0000 (0:00:00.282) 0:00:05.252 ******* 2026-01-13 01:17:23.608031 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:17:23.608036 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:17:23.608042 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:17:23.608048 | orchestrator | 2026-01-13 01:17:23.608056 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-13 01:17:23.608067 | orchestrator | Tuesday 13 January 2026 01:17:14 +0000 (0:00:00.494) 0:00:05.747 ******* 2026-01-13 01:17:23.608078 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:17:23.608086 | orchestrator | 2026-01-13 01:17:23.608120 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-13 01:17:23.608129 | orchestrator | Tuesday 13 January 2026 01:17:15 +0000 (0:00:00.238) 0:00:05.985 ******* 2026-01-13 01:17:23.608136 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:17:23.608144 | orchestrator | 2026-01-13 01:17:23.608152 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-13 01:17:23.608160 | orchestrator | Tuesday 13 January 2026 01:17:15 +0000 (0:00:00.284) 0:00:06.270 ******* 2026-01-13 01:17:23.608167 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:17:23.608175 | orchestrator | 2026-01-13 01:17:23.608183 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-01-13 01:17:23.608190 | orchestrator | Tuesday 13 January 2026 01:17:15 +0000 (0:00:00.251) 0:00:06.521 ******* 2026-01-13 01:17:23.608197 | orchestrator | 2026-01-13 01:17:23.608205 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-13 01:17:23.608213 | orchestrator | Tuesday 13 January 2026 01:17:15 +0000 (0:00:00.071) 0:00:06.593 ******* 2026-01-13 01:17:23.608221 | orchestrator | 2026-01-13 01:17:23.608229 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-13 01:17:23.608247 | orchestrator | Tuesday 13 January 2026 01:17:15 +0000 (0:00:00.070) 0:00:06.663 ******* 2026-01-13 01:17:23.608255 | orchestrator | 2026-01-13 01:17:23.608262 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-13 01:17:23.608270 | orchestrator | Tuesday 13 January 2026 01:17:15 +0000 (0:00:00.076) 0:00:06.740 ******* 2026-01-13 01:17:23.608278 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:17:23.608285 | orchestrator | 2026-01-13 01:17:23.608293 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-01-13 01:17:23.608301 | orchestrator | Tuesday 13 January 2026 01:17:16 +0000 (0:00:00.245) 0:00:06.986 ******* 2026-01-13 01:17:23.608308 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:17:23.608315 | orchestrator | 2026-01-13 01:17:23.608341 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-01-13 01:17:23.608350 | orchestrator | Tuesday 13 January 2026 01:17:16 +0000 (0:00:00.253) 0:00:07.239 ******* 2026-01-13 01:17:23.608357 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:17:23.608365 | orchestrator | 2026-01-13 01:17:23.608373 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-01-13 01:17:23.608381 | orchestrator | Tuesday 13 January 2026 01:17:16 +0000 (0:00:00.115) 0:00:07.355 ******* 2026-01-13 01:17:23.608388 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:17:23.608397 | orchestrator | 2026-01-13 01:17:23.608404 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-01-13 01:17:23.608411 | orchestrator | Tuesday 13 January 2026 01:17:18 +0000 (0:00:01.834) 0:00:09.189 ******* 2026-01-13 01:17:23.608417 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:17:23.608425 | orchestrator | 2026-01-13 01:17:23.608432 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-01-13 01:17:23.608439 | orchestrator | Tuesday 13 January 2026 01:17:18 +0000 (0:00:00.427) 0:00:09.616 ******* 2026-01-13 01:17:23.608446 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:17:23.608454 | orchestrator | 2026-01-13 01:17:23.608462 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-01-13 01:17:23.608469 | orchestrator | Tuesday 13 January 2026 01:17:19 +0000 (0:00:00.327) 0:00:09.944 ******* 2026-01-13 01:17:23.608477 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:17:23.608484 | orchestrator | 2026-01-13 01:17:23.608492 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-01-13 01:17:23.608498 | orchestrator | Tuesday 13 January 2026 01:17:19 +0000 (0:00:00.124) 0:00:10.069 ******* 2026-01-13 01:17:23.608505 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:17:23.608512 | orchestrator | 2026-01-13 01:17:23.608519 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-13 01:17:23.608527 | orchestrator | Tuesday 13 January 2026 01:17:19 +0000 (0:00:00.154) 0:00:10.223 ******* 2026-01-13 01:17:23.608534 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-13 
01:17:23.608541 | orchestrator | 2026-01-13 01:17:23.608550 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-13 01:17:23.608557 | orchestrator | Tuesday 13 January 2026 01:17:19 +0000 (0:00:00.249) 0:00:10.472 ******* 2026-01-13 01:17:23.608565 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:17:23.608572 | orchestrator | 2026-01-13 01:17:23.608579 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-13 01:17:23.608588 | orchestrator | Tuesday 13 January 2026 01:17:19 +0000 (0:00:00.232) 0:00:10.704 ******* 2026-01-13 01:17:23.608595 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-13 01:17:23.608603 | orchestrator | 2026-01-13 01:17:23.608621 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-13 01:17:23.608629 | orchestrator | Tuesday 13 January 2026 01:17:20 +0000 (0:00:01.211) 0:00:11.915 ******* 2026-01-13 01:17:23.608636 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-13 01:17:23.608643 | orchestrator | 2026-01-13 01:17:23.608659 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-13 01:17:23.608666 | orchestrator | Tuesday 13 January 2026 01:17:21 +0000 (0:00:00.267) 0:00:12.182 ******* 2026-01-13 01:17:23.608674 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-13 01:17:23.608681 | orchestrator | 2026-01-13 01:17:23.608689 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-13 01:17:23.608697 | orchestrator | Tuesday 13 January 2026 01:17:21 +0000 (0:00:00.239) 0:00:12.423 ******* 2026-01-13 01:17:23.608704 | orchestrator | 2026-01-13 01:17:23.608737 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-13 01:17:23.608746 | orchestrator 
| Tuesday 13 January 2026 01:17:21 +0000 (0:00:00.068) 0:00:12.491 *******
2026-01-13 01:17:23.608753 | orchestrator |
2026-01-13 01:17:23.608761 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-13 01:17:23.608768 | orchestrator | Tuesday 13 January 2026 01:17:21 +0000 (0:00:00.067) 0:00:12.559 *******
2026-01-13 01:17:23.608775 | orchestrator |
2026-01-13 01:17:23.608782 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-01-13 01:17:23.608790 | orchestrator | Tuesday 13 January 2026 01:17:21 +0000 (0:00:00.255) 0:00:12.815 *******
2026-01-13 01:17:23.608797 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-13 01:17:23.608805 | orchestrator |
2026-01-13 01:17:23.608813 | orchestrator | TASK [Print report file information] *******************************************
2026-01-13 01:17:23.608821 | orchestrator | Tuesday 13 January 2026 01:17:23 +0000 (0:00:01.299) 0:00:14.115 *******
2026-01-13 01:17:23.608828 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-01-13 01:17:23.608836 | orchestrator |     "msg": [
2026-01-13 01:17:23.608845 | orchestrator |         "Validator run completed.",
2026-01-13 01:17:23.608853 | orchestrator |         "You can find the report file here:",
2026-01-13 01:17:23.608861 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2026-01-13T01:17:10+00:00-report.json",
2026-01-13 01:17:23.608870 | orchestrator |         "on the following host:",
2026-01-13 01:17:23.608878 | orchestrator |         "testbed-manager"
2026-01-13 01:17:23.608886 | orchestrator |     ]
2026-01-13 01:17:23.608894 | orchestrator | }
2026-01-13 01:17:23.608901 | orchestrator |
2026-01-13 01:17:23.608908 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 01:17:23.608917 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-13 01:17:23.608926 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-13 01:17:23.608946 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-13 01:17:23.954456 | orchestrator |
2026-01-13 01:17:23.954526 | orchestrator |
2026-01-13 01:17:23.954534 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 01:17:23.954540 | orchestrator | Tuesday 13 January 2026 01:17:23 +0000 (0:00:00.403) 0:00:14.518 *******
2026-01-13 01:17:23.954544 | orchestrator | ===============================================================================
2026-01-13 01:17:23.954548 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.83s
2026-01-13 01:17:23.954552 | orchestrator | Write report file ------------------------------------------------------- 1.30s
2026-01-13 01:17:23.954556 | orchestrator | Get container info ------------------------------------------------------ 1.25s
2026-01-13 01:17:23.954560 | orchestrator | Aggregate test results step one ----------------------------------------- 1.21s
2026-01-13 01:17:23.954564 | orchestrator | Create report output directory ------------------------------------------ 1.04s
2026-01-13 01:17:23.954568 | orchestrator | Get timestamp for report file ------------------------------------------- 0.79s
2026-01-13 01:17:23.954571 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.49s
2026-01-13 01:17:23.954593 | orchestrator | Set test result to passed if container is existing ---------------------- 0.47s
2026-01-13 01:17:23.954597 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.43s
2026-01-13 01:17:23.954601 | orchestrator | Print report file information ------------------------------------------- 0.40s
2026-01-13 01:17:23.954605 | orchestrator | Flush handlers ---------------------------------------------------------- 0.39s
2026-01-13 01:17:23.954609 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.33s
2026-01-13 01:17:23.954612 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s
2026-01-13 01:17:23.954616 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s
2026-01-13 01:17:23.954629 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s
2026-01-13 01:17:23.954633 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.28s
2026-01-13 01:17:23.954638 | orchestrator | Set test result to failed if container is missing ----------------------- 0.27s
2026-01-13 01:17:23.954642 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s
2026-01-13 01:17:23.954645 | orchestrator | Fail due to missing containers ------------------------------------------ 0.25s
2026-01-13 01:17:23.954649 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s
2026-01-13 01:17:24.332736 | orchestrator | + osism validate ceph-osds
2026-01-13 01:17:45.279159 | orchestrator |
2026-01-13 01:17:45.279266 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-01-13 01:17:45.279278 | orchestrator |
2026-01-13 01:17:45.279286 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-01-13 01:17:45.279292 | orchestrator | Tuesday 13 January 2026 01:17:40 +0000 (0:00:00.408) 0:00:00.408 *******
2026-01-13 01:17:45.279300 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-13 01:17:45.279306 | orchestrator |
2026-01-13 01:17:45.279311 | orchestrator | TASK [Get extra vars for Ceph configuration]
***********************************
2026-01-13 01:17:45.279318 | orchestrator | Tuesday 13 January 2026 01:17:41 +0000 (0:00:00.808) 0:00:01.216 *******
2026-01-13 01:17:45.279325 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-13 01:17:45.279331 | orchestrator |
2026-01-13 01:17:45.279337 | orchestrator | TASK [Create report output directory] ******************************************
2026-01-13 01:17:45.279343 | orchestrator | Tuesday 13 January 2026 01:17:42 +0000 (0:00:00.499) 0:00:01.716 *******
2026-01-13 01:17:45.279349 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-13 01:17:45.279355 | orchestrator |
2026-01-13 01:17:45.279362 | orchestrator | TASK [Define report vars] ******************************************************
2026-01-13 01:17:45.279368 | orchestrator | Tuesday 13 January 2026 01:17:42 +0000 (0:00:00.117) 0:00:02.386 *******
2026-01-13 01:17:45.279375 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:45.279383 | orchestrator |
2026-01-13 01:17:45.279390 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-01-13 01:17:45.279395 | orchestrator | Tuesday 13 January 2026 01:17:43 +0000 (0:00:00.130) 0:00:02.504 *******
2026-01-13 01:17:45.279400 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:45.279404 | orchestrator |
2026-01-13 01:17:45.279408 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-01-13 01:17:45.279412 | orchestrator | Tuesday 13 January 2026 01:17:43 +0000 (0:00:00.301) 0:00:02.634 *******
2026-01-13 01:17:45.279416 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:45.279420 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:17:45.279424 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:17:45.279428 | orchestrator |
2026-01-13 01:17:45.279432 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-01-13 01:17:45.279435 | orchestrator | Tuesday 13 January 2026 01:17:43 +0000 (0:00:00.142) 0:00:02.936 *******
2026-01-13 01:17:45.279439 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:45.279462 | orchestrator |
2026-01-13 01:17:45.279466 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-01-13 01:17:45.279469 | orchestrator | Tuesday 13 January 2026 01:17:43 +0000 (0:00:00.312) 0:00:03.078 *******
2026-01-13 01:17:45.279473 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:45.279477 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:17:45.279481 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:17:45.279484 | orchestrator |
2026-01-13 01:17:45.279488 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-01-13 01:17:45.279492 | orchestrator | Tuesday 13 January 2026 01:17:43 +0000 (0:00:00.601) 0:00:03.391 *******
2026-01-13 01:17:45.279496 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:45.279499 | orchestrator |
2026-01-13 01:17:45.279503 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-13 01:17:45.279507 | orchestrator | Tuesday 13 January 2026 01:17:44 +0000 (0:00:00.480) 0:00:03.992 *******
2026-01-13 01:17:45.279510 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:45.279514 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:17:45.279518 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:17:45.279522 | orchestrator |
2026-01-13 01:17:45.279526 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-01-13 01:17:45.279530 | orchestrator | Tuesday 13 January 2026 01:17:45 +0000 (0:00:00.480) 0:00:04.472 *******
2026-01-13 01:17:45.279535 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd3a78fcb007e9c9cf8beb00c41c41182ff4f7141b070b40787de7ec0e5920240', 'image':
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-01-13 01:17:45.279541 | orchestrator | skipping: [testbed-node-3] => (item={'id': '74ca70dc77d6f055dc1977d50acf32ef5a65859a316ff150c7e956e8ce8f113e', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-13 01:17:45.279548 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0f057698e43a39bb26c65a70895321e4c94977d93367f4a34044d6235e50a35d', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-13 01:17:45.279553 | orchestrator | skipping: [testbed-node-3] => (item={'id': '501c2d37b004c04e960539846b79750a31df18d4984494bccb3d321f67b12355', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-01-13 01:17:45.279576 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fc81a65026c293243876f3a037c928c9e8302c714721c298c36dc8d11fef9f2c', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2026-01-13 01:17:45.279598 | orchestrator | skipping: [testbed-node-3] => (item={'id': '16f8e6164b65c1962d79a58cc1df74729f2818837581bec4bb9fd32af3d46c0d', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-01-13 01:17:45.279604 | orchestrator | skipping: [testbed-node-3] => (item={'id': '77c17e344c751f33b4e619cd49d30ffb165185980a41f698ee4acbf212a87236', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2026-01-13 01:17:45.279610 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': 'ac50de9f26bbe3e9c8ff977156ab573314956294e9d033aabfec97231c5de069', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-01-13 01:17:45.279617 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'dbf500f7855dadb661620cb1f20e7a118a01cee114c3768e9b58a9de098a0951', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2026-01-13 01:17:45.279630 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'dcc4c3b0e0ce368d05ffcfe9a38a6a63dbd36bfc0220a82a31c783b587b0d12b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-13 01:17:45.279635 | orchestrator | ok: [testbed-node-3] => (item={'id': '9ff012654489585a85c16e6a37432be319b40455fe8df31375d59993386344b0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2026-01-13 01:17:45.279639 | orchestrator | ok: [testbed-node-3] => (item={'id': 'add6cf9e8ba3561a450d02e573c16e4ec90b99d2969871fd5d8194a7d854fb6a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'}) 2026-01-13 01:17:45.279643 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8c034d9af798b7f0d7980314c29c6fa6f1c01c9ad236cf05c53057267ff850ae', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-01-13 01:17:45.279648 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0476ed090495228a22c133fa90cf3661a6ce0a9edad3173504a702edd4a6f2e7', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
29 minutes (healthy)'})  2026-01-13 01:17:45.279655 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c24bd64d85708f35ec665c98850bf1b7a8be7d31dd6eb78d04f07ea294c03ebc', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2026-01-13 01:17:45.279660 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e6956e22e953a73a77428962cadb3516942f8cbfd1c7f1e8b8bacd3e2e5d8896', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2026-01-13 01:17:45.279665 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0a389f80a55f73a57fc38c284bd5b4817cdefaac8ea27b5acb2311f16110e7a7', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2026-01-13 01:17:45.279670 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ff37babf5b9ed1b3a668f1c5b88f9966d8266abd23565e787200476559530c29', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-13 01:17:45.279675 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1de249b56223bcefe828864482a846e0ef2aed56db7e4b7fe0b3f301a8e6a6b6', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-01-13 01:17:45.279682 | orchestrator | skipping: [testbed-node-4] => (item={'id': '203cae7e9471b4926a0ba5cca1e546eab5204e49a478a8d214ce0305c698ac24', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-13 01:17:45.279695 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8ee1be56a0b59d70014e7aacbdd8c7b3e88ef9c11ac1a606f8c66be0478d4d24', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 
'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-13 01:17:45.279708 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6641d8b49495231b8e1b3f96ca8ee6c0dc613562be2f4df89b8f1a5058b21511', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-01-13 01:17:45.513991 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0eea71216deabba3ffac7a2f099e5eaddb2e82c95eeab999562b2cfaf89b061e', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2026-01-13 01:17:45.514221 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c3c80c73b6070420d84aee4a793e8abf74aec8f5c2ba8b2ac3f84a94c546ed32', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-01-13 01:17:45.514235 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ef7e548b1eb7fea6e0b310c2dec11236a0162d47a8f555e3963a86a4657d1f01', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2026-01-13 01:17:45.514240 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c77125e3250d11076354289b7976a700b13354225538c4c521c2e1e903bf8e94', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-01-13 01:17:45.514244 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3ab45e01af15fa611a229b1c189e995bce34019640c57b4b2f6a6cf1b16745c5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-01-13 01:17:45.514249 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'dd0d63e50b80ccb93e2755428dc7a9d03e7fbfd8406794efafa36fe04fd2d261', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-13 01:17:45.514255 | orchestrator | ok: [testbed-node-4] => (item={'id': '70a450169183dee9c5f573c57a313c5d7dc19d008fd64ce40f51cfacb51d2998', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'}) 2026-01-13 01:17:45.514260 | orchestrator | ok: [testbed-node-4] => (item={'id': 'd7145a18aee01206e1044557c942e7512c26d36b8ea41dd3bd90dfa1375c494a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2026-01-13 01:17:45.514264 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0cda99d5c6691e323e9278321577a2ef44fbe4cd1728ce9f9e010ec19aeb23e9', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-01-13 01:17:45.514268 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e33387bfa156e99b01fde1badacac8ccb7c1e86de42dee06ae8f6248329f3c13', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2026-01-13 01:17:45.514272 | orchestrator | skipping: [testbed-node-4] => (item={'id': '25010fc9c54f4efff56dd3d7d36bbbb6978a3d3b714a88e824d15b9670cab64e', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2026-01-13 01:17:45.514277 | orchestrator | skipping: [testbed-node-4] => (item={'id': '296655f9405254e03e3e41af4b0c690c30cb7444585d0a8594379f81d663231b', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2026-01-13 01:17:45.514281 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': '20aee76fafcd6639019c80ed9e8e7410768973fbe6af6e91304b80ee51c6eb42', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2026-01-13 01:17:45.514296 | orchestrator | skipping: [testbed-node-4] => (item={'id': '761948de6bc91d66b89370e7d945883288207c20eb969377c91dae379d7edd50', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-13 01:17:45.514303 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b328d61fdf17323e257ebfd30e1bad731e307c21fecc881c0d2b0b7dbf9b598b', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2026-01-13 01:17:45.514323 | orchestrator | skipping: [testbed-node-5] => (item={'id': '23ab7758342847ebd9fb62766ebc2485a7160fc866346d1eefa9cd7ee949d04f', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-13 01:17:45.514337 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3ab1060c447cc5406468aef271af340002df988de2302a437d5bb4223e809690', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-13 01:17:45.514344 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0665162f52b763b5c76931e9af0ac5ddca90c79858155af311da5dec36ffae49', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2026-01-13 01:17:45.514350 | orchestrator | skipping: [testbed-node-5] => (item={'id': '98b346020f62df488d4e839711b116281e3169623da33161f52db7bf96da431f', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 
minutes'})  2026-01-13 01:17:45.514356 | orchestrator | skipping: [testbed-node-5] => (item={'id': '98dbddf37d2c7c907799c21d6de41884478c4765dce765b1cd20068da70f3006', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-01-13 01:17:45.514360 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4245043ad1a72dbc1b0fd2df2eaf5ca0ebdfb611e01db3906f638ed40bd9d3e0', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2026-01-13 01:17:45.514364 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9e4bc82239fc81ff290bffbcf7402743ef3d5caacee719ffd53a7d64922b653f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-01-13 01:17:45.514367 | orchestrator | skipping: [testbed-node-5] => (item={'id': '35a21964a68f180972a4493eee94d2297773e4641b4a8b5fc45aa524d89c0331', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-01-13 01:17:45.514371 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1e933cbe1b7199bdb6c7dc8be19f3f068c99fe685ce5ca443ba0040fd5129de7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-13 01:17:45.514375 | orchestrator | ok: [testbed-node-5] => (item={'id': '5b124b584b96f6e367fb9719c890f8fe2a7423c3b676041d3b94fe8c9864fac8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'}) 2026-01-13 01:17:45.514379 | orchestrator | ok: [testbed-node-5] => (item={'id': '665fd7911952682cc2e6c6c1adc4e90293e5a58e0dc18cbdc6f89c6689f2ea75', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'}) 2026-01-13 01:17:45.514385 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dded726ea7cd20156756fac7346c2c27432e38b646e04072a7add8bd5f9bce38', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-01-13 01:17:45.514391 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fea967d64cbb6e2e73642576dc14f4c869b4afcc8691d2d277c84322fe2d1d3a', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2026-01-13 01:17:45.514401 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9c6af98f19ef3213321fa58afcf4a625c5bcb6557e92f5a59f835066551aa5a6', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2026-01-13 01:17:45.514423 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3890e7b35cfca588e566c42f4a3a874bf3e39e84e26e89b2a52a088438fbcfac', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2026-01-13 01:17:45.514440 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8d0fd15f8d0ca6eda9ff73dab70c2224170befd946471b96881cf334b7b18763', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2026-01-13 01:17:45.514454 | orchestrator | skipping: [testbed-node-5] => (item={'id': '14dfc3f860f2cfff4737b8970309e3f5d84a8d27141a4b67b7aeb9c653936983', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-13 01:17:59.126863 | orchestrator | 2026-01-13 01:17:59.126953 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ********************************
2026-01-13 01:17:59.126966 | orchestrator | Tuesday 13 January 2026 01:17:45 +0000 (0:00:00.480) 0:00:04.953 *******
2026-01-13 01:17:59.126973 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.126980 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:17:59.126986 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:17:59.126992 | orchestrator |
2026-01-13 01:17:59.126998 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-01-13 01:17:59.127002 | orchestrator | Tuesday 13 January 2026 01:17:45 +0000 (0:00:00.301) 0:00:05.254 *******
2026-01-13 01:17:59.127006 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:59.127011 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:17:59.127014 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:17:59.127018 | orchestrator |
2026-01-13 01:17:59.127022 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-01-13 01:17:59.127026 | orchestrator | Tuesday 13 January 2026 01:17:46 +0000 (0:00:00.494) 0:00:05.749 *******
2026-01-13 01:17:59.127030 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.127034 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:17:59.127037 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:17:59.127084 | orchestrator |
2026-01-13 01:17:59.127089 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-13 01:17:59.127093 | orchestrator | Tuesday 13 January 2026 01:17:46 +0000 (0:00:00.286) 0:00:06.035 *******
2026-01-13 01:17:59.127097 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.127100 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:17:59.127104 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:17:59.127108 | orchestrator |
2026-01-13 01:17:59.127112 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-01-13 01:17:59.127116 | orchestrator | Tuesday 13 January 2026 01:17:46 +0000 (0:00:00.276) 0:00:06.312 *******
2026-01-13 01:17:59.127120 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-01-13 01:17:59.127125 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-01-13 01:17:59.127129 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:59.127133 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-01-13 01:17:59.127136 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-01-13 01:17:59.127140 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:17:59.127144 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-01-13 01:17:59.127148 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-01-13 01:17:59.127151 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:17:59.127155 | orchestrator |
2026-01-13 01:17:59.127159 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-01-13 01:17:59.127163 | orchestrator | Tuesday 13 January 2026 01:17:47 +0000 (0:00:00.300) 0:00:06.612 *******
2026-01-13 01:17:59.127187 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.127191 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:17:59.127195 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:17:59.127199 | orchestrator |
2026-01-13 01:17:59.127203 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-01-13 01:17:59.127206 | orchestrator | Tuesday 13 January 2026 01:17:47 +0000 (0:00:00.473) 0:00:07.086 *******
2026-01-13 01:17:59.127210 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:59.127214 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:17:59.127218 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:17:59.127221 | orchestrator |
2026-01-13 01:17:59.127225 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-01-13 01:17:59.127229 | orchestrator | Tuesday 13 January 2026 01:17:47 +0000 (0:00:00.294) 0:00:07.381 *******
2026-01-13 01:17:59.127233 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:59.127236 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:17:59.127240 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:17:59.127244 | orchestrator |
2026-01-13 01:17:59.127247 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-01-13 01:17:59.127251 | orchestrator | Tuesday 13 January 2026 01:17:48 +0000 (0:00:00.275) 0:00:07.656 *******
2026-01-13 01:17:59.127255 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.127259 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:17:59.127263 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:17:59.127267 | orchestrator |
2026-01-13 01:17:59.127270 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-13 01:17:59.127274 | orchestrator | Tuesday 13 January 2026 01:17:48 +0000 (0:00:00.277) 0:00:07.933 *******
2026-01-13 01:17:59.127278 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:59.127282 | orchestrator |
2026-01-13 01:17:59.127286 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-13 01:17:59.127290 | orchestrator | Tuesday 13 January 2026 01:17:48 +0000 (0:00:00.470) 0:00:08.404 *******
2026-01-13 01:17:59.127293 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:59.127297 | orchestrator |
2026-01-13 01:17:59.127301 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-13 01:17:59.127305 | orchestrator | Tuesday 13 January 2026 01:17:49 +0000 (0:00:00.626) 0:00:09.030 *******
2026-01-13 01:17:59.127309 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:59.127312 | orchestrator |
2026-01-13 01:17:59.127316 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-13 01:17:59.127320 | orchestrator | Tuesday 13 January 2026 01:17:49 +0000 (0:00:00.231) 0:00:09.261 *******
2026-01-13 01:17:59.127324 | orchestrator |
2026-01-13 01:17:59.127328 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-13 01:17:59.127332 | orchestrator | Tuesday 13 January 2026 01:17:49 +0000 (0:00:00.068) 0:00:09.329 *******
2026-01-13 01:17:59.127335 | orchestrator |
2026-01-13 01:17:59.127339 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-13 01:17:59.127355 | orchestrator | Tuesday 13 January 2026 01:17:49 +0000 (0:00:00.080) 0:00:09.410 *******
2026-01-13 01:17:59.127359 | orchestrator |
2026-01-13 01:17:59.127363 | orchestrator | TASK [Print report file information] *******************************************
2026-01-13 01:17:59.127367 | orchestrator | Tuesday 13 January 2026 01:17:50 +0000 (0:00:00.083) 0:00:09.493 *******
2026-01-13 01:17:59.127371 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:59.127374 | orchestrator |
2026-01-13 01:17:59.127378 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-01-13 01:17:59.127382 | orchestrator | Tuesday 13 January 2026 01:17:50 +0000 (0:00:00.251) 0:00:09.744 *******
2026-01-13 01:17:59.127386 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:59.127389 | orchestrator |
2026-01-13 01:17:59.127393 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-13 01:17:59.127397 | orchestrator | Tuesday 13 January 2026 01:17:50 +0000 (0:00:00.238) 0:00:09.983 *******
2026-01-13 01:17:59.127404 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.127408 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:17:59.127412 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:17:59.127416 | orchestrator |
2026-01-13 01:17:59.127451 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-01-13 01:17:59.127456 | orchestrator | Tuesday 13 January 2026 01:17:50 +0000 (0:00:00.290) 0:00:10.274 *******
2026-01-13 01:17:59.127460 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.127465 | orchestrator |
2026-01-13 01:17:59.127469 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-01-13 01:17:59.127474 | orchestrator | Tuesday 13 January 2026 01:17:51 +0000 (0:00:00.239) 0:00:10.513 *******
2026-01-13 01:17:59.127478 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-13 01:17:59.127482 | orchestrator |
2026-01-13 01:17:59.127487 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-01-13 01:17:59.127491 | orchestrator | Tuesday 13 January 2026 01:17:53 +0000 (0:00:02.142) 0:00:12.655 *******
2026-01-13 01:17:59.127496 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.127500 | orchestrator |
2026-01-13 01:17:59.127504 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-01-13 01:17:59.127509 | orchestrator | Tuesday 13 January 2026 01:17:53 +0000 (0:00:00.132) 0:00:12.787 *******
2026-01-13 01:17:59.127513 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.127517 | orchestrator |
2026-01-13 01:17:59.127522 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-01-13 01:17:59.127526 | orchestrator | Tuesday 13 January 2026 01:17:53 +0000 (0:00:00.289) 0:00:13.077 *******
2026-01-13 01:17:59.127531 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:59.127535 | orchestrator |
2026-01-13 01:17:59.127539 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-01-13 01:17:59.127544 | orchestrator | Tuesday 13 January 2026 01:17:53 +0000 (0:00:00.121) 0:00:13.198 *******
2026-01-13 01:17:59.127548 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.127552 | orchestrator |
2026-01-13 01:17:59.127557 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-13 01:17:59.127561 | orchestrator | Tuesday 13 January 2026 01:17:53 +0000 (0:00:00.108) 0:00:13.306 *******
2026-01-13 01:17:59.127565 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.127570 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:17:59.127574 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:17:59.127578 | orchestrator |
2026-01-13 01:17:59.127583 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-01-13 01:17:59.127587 | orchestrator | Tuesday 13 January 2026 01:17:54 +0000 (0:00:00.281) 0:00:13.588 *******
2026-01-13 01:17:59.127592 | orchestrator | changed: [testbed-node-3]
2026-01-13 01:17:59.127596 | orchestrator | changed: [testbed-node-4]
2026-01-13 01:17:59.127600 | orchestrator | changed: [testbed-node-5]
2026-01-13 01:17:59.127604 | orchestrator |
2026-01-13 01:17:59.127609 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-01-13 01:17:59.127613 | orchestrator | Tuesday 13 January 2026 01:17:56 +0000 (0:00:02.739) 0:00:16.328 *******
2026-01-13 01:17:59.127618 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.127622 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:17:59.127626 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:17:59.127631 | orchestrator |
2026-01-13 01:17:59.127635 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-01-13 01:17:59.127639 | orchestrator | Tuesday 13 January 2026 01:17:57 +0000 (0:00:00.484) 0:00:16.812 *******
2026-01-13 01:17:59.127643 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.127648 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:17:59.127652 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:17:59.127656 | orchestrator |
2026-01-13 01:17:59.127661 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-01-13 01:17:59.127665 | orchestrator | Tuesday 13 January 2026 01:17:57 +0000 (0:00:00.478) 0:00:17.291 *******
2026-01-13 01:17:59.127673 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:59.127680 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:17:59.127686 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:17:59.127693 | orchestrator |
2026-01-13 01:17:59.127702 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-01-13 01:17:59.127708 | orchestrator | Tuesday 13 January 2026 01:17:58 +0000 (0:00:00.290) 0:00:17.581 *******
2026-01-13 01:17:59.127714 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:17:59.127722 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:17:59.127728 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:17:59.127734 | orchestrator |
2026-01-13 01:17:59.127741 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-01-13 01:17:59.127748 | orchestrator | Tuesday 13 January 2026 01:17:58 +0000 (0:00:00.461) 0:00:18.043 *******
2026-01-13 01:17:59.127755 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:59.127761 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:17:59.127768 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:17:59.127773 | orchestrator |
2026-01-13 01:17:59.127778 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-01-13 01:17:59.127782 | orchestrator | Tuesday 13 January 2026 01:17:58 +0000 (0:00:00.276) 0:00:18.319 *******
2026-01-13 01:17:59.127787 | orchestrator | skipping: [testbed-node-3]
2026-01-13 01:17:59.127791 | orchestrator | skipping: [testbed-node-4]
2026-01-13 01:17:59.127796 | orchestrator | skipping: [testbed-node-5]
2026-01-13 01:17:59.127800 | orchestrator |
2026-01-13 01:17:59.127808 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-13 01:18:06.596153 | orchestrator | Tuesday 13 January 2026 01:17:59 +0000 (0:00:00.263) 0:00:18.583 *******
2026-01-13 01:18:06.596260 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:18:06.596271 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:18:06.596278 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:18:06.596284 | orchestrator |
2026-01-13 01:18:06.596290 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-01-13 01:18:06.596296 | orchestrator | Tuesday 13 January 2026 01:17:59 +0000 (0:00:00.547) 0:00:19.130 *******
2026-01-13 01:18:06.596303 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:18:06.596309 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:18:06.596315 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:18:06.596321 | orchestrator |
2026-01-13 01:18:06.596327 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-01-13 01:18:06.596334 | orchestrator | Tuesday 13 January 2026 01:18:00 +0000 (0:00:00.903) 0:00:20.034 *******
2026-01-13 01:18:06.596341 | orchestrator | ok: [testbed-node-3]
2026-01-13 01:18:06.596347 | orchestrator | ok: [testbed-node-4]
2026-01-13 01:18:06.596353 | orchestrator | ok: [testbed-node-5]
2026-01-13 01:18:06.596359 | orchestrator |
2026-01-13 01:18:06.596365 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-01-13
01:18:06.596371 | orchestrator | Tuesday 13 January 2026 01:18:00 +0000 (0:00:00.296) 0:00:20.331 ******* 2026-01-13 01:18:06.596377 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:18:06.596385 | orchestrator | skipping: [testbed-node-4] 2026-01-13 01:18:06.596391 | orchestrator | skipping: [testbed-node-5] 2026-01-13 01:18:06.596398 | orchestrator | 2026-01-13 01:18:06.596404 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-01-13 01:18:06.596411 | orchestrator | Tuesday 13 January 2026 01:18:01 +0000 (0:00:00.268) 0:00:20.599 ******* 2026-01-13 01:18:06.596418 | orchestrator | ok: [testbed-node-3] 2026-01-13 01:18:06.596425 | orchestrator | ok: [testbed-node-4] 2026-01-13 01:18:06.596431 | orchestrator | ok: [testbed-node-5] 2026-01-13 01:18:06.596437 | orchestrator | 2026-01-13 01:18:06.596444 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-13 01:18:06.596449 | orchestrator | Tuesday 13 January 2026 01:18:01 +0000 (0:00:00.287) 0:00:20.887 ******* 2026-01-13 01:18:06.596456 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-13 01:18:06.596462 | orchestrator | 2026-01-13 01:18:06.596493 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-13 01:18:06.596499 | orchestrator | Tuesday 13 January 2026 01:18:01 +0000 (0:00:00.234) 0:00:21.122 ******* 2026-01-13 01:18:06.596505 | orchestrator | skipping: [testbed-node-3] 2026-01-13 01:18:06.596511 | orchestrator | 2026-01-13 01:18:06.596517 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-13 01:18:06.596523 | orchestrator | Tuesday 13 January 2026 01:18:02 +0000 (0:00:00.694) 0:00:21.816 ******* 2026-01-13 01:18:06.596529 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-13 01:18:06.596535 | orchestrator | 2026-01-13 01:18:06.596541 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-13 01:18:06.596548 | orchestrator | Tuesday 13 January 2026 01:18:03 +0000 (0:00:01.512) 0:00:23.329 ******* 2026-01-13 01:18:06.596554 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-13 01:18:06.596560 | orchestrator | 2026-01-13 01:18:06.596566 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-13 01:18:06.596573 | orchestrator | Tuesday 13 January 2026 01:18:04 +0000 (0:00:00.278) 0:00:23.607 ******* 2026-01-13 01:18:06.596579 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-13 01:18:06.596586 | orchestrator | 2026-01-13 01:18:06.596592 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-13 01:18:06.596598 | orchestrator | Tuesday 13 January 2026 01:18:04 +0000 (0:00:00.230) 0:00:23.838 ******* 2026-01-13 01:18:06.596604 | orchestrator | 2026-01-13 01:18:06.596611 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-13 01:18:06.596617 | orchestrator | Tuesday 13 January 2026 01:18:04 +0000 (0:00:00.068) 0:00:23.906 ******* 2026-01-13 01:18:06.596623 | orchestrator | 2026-01-13 01:18:06.596644 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-13 01:18:06.596650 | orchestrator | Tuesday 13 January 2026 01:18:04 +0000 (0:00:00.066) 0:00:23.972 ******* 2026-01-13 01:18:06.596656 | orchestrator | 2026-01-13 01:18:06.596662 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-13 01:18:06.596668 | orchestrator | Tuesday 13 January 2026 01:18:04 +0000 (0:00:00.069) 0:00:24.041 ******* 2026-01-13 01:18:06.596674 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-13 01:18:06.596680 | orchestrator | 
2026-01-13 01:18:06.596687 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-13 01:18:06.596703 | orchestrator | Tuesday 13 January 2026 01:18:05 +0000 (0:00:01.240) 0:00:25.282 ******* 2026-01-13 01:18:06.596727 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-01-13 01:18:06.596732 | orchestrator |  "msg": [ 2026-01-13 01:18:06.596737 | orchestrator |  "Validator run completed.", 2026-01-13 01:18:06.596742 | orchestrator |  "You can find the report file here:", 2026-01-13 01:18:06.596747 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-01-13T01:17:41+00:00-report.json", 2026-01-13 01:18:06.596753 | orchestrator |  "on the following host:", 2026-01-13 01:18:06.596757 | orchestrator |  "testbed-manager" 2026-01-13 01:18:06.596762 | orchestrator |  ] 2026-01-13 01:18:06.596767 | orchestrator | } 2026-01-13 01:18:06.596772 | orchestrator | 2026-01-13 01:18:06.596776 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:18:06.596782 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-13 01:18:06.596788 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-13 01:18:06.596809 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-13 01:18:06.596813 | orchestrator | 2026-01-13 01:18:06.596824 | orchestrator | 2026-01-13 01:18:06.596828 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:18:06.596833 | orchestrator | Tuesday 13 January 2026 01:18:06 +0000 (0:00:00.386) 0:00:25.668 ******* 2026-01-13 01:18:06.596837 | orchestrator | =============================================================================== 2026-01-13 01:18:06.596842 | orchestrator | List ceph LVM volumes 
and collect data ---------------------------------- 2.74s 2026-01-13 01:18:06.596846 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.14s 2026-01-13 01:18:06.596851 | orchestrator | Aggregate test results step one ----------------------------------------- 1.51s 2026-01-13 01:18:06.596855 | orchestrator | Write report file ------------------------------------------------------- 1.24s 2026-01-13 01:18:06.596860 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.90s 2026-01-13 01:18:06.596864 | orchestrator | Get timestamp for report file ------------------------------------------- 0.81s 2026-01-13 01:18:06.596868 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.69s 2026-01-13 01:18:06.596873 | orchestrator | Create report output directory ------------------------------------------ 0.67s 2026-01-13 01:18:06.596877 | orchestrator | Aggregate test results step two ----------------------------------------- 0.63s 2026-01-13 01:18:06.596882 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.60s 2026-01-13 01:18:06.596886 | orchestrator | Prepare test data ------------------------------------------------------- 0.55s 2026-01-13 01:18:06.596891 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.50s 2026-01-13 01:18:06.596895 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.49s 2026-01-13 01:18:06.596900 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.48s 2026-01-13 01:18:06.596904 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2026-01-13 01:18:06.596908 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.48s 2026-01-13 01:18:06.596913 | orchestrator | Get unencrypted and encrypted OSDs 
-------------------------------------- 0.48s 2026-01-13 01:18:06.596917 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.47s 2026-01-13 01:18:06.596922 | orchestrator | Aggregate test results step one ----------------------------------------- 0.47s 2026-01-13 01:18:06.596926 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.46s 2026-01-13 01:18:06.926115 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-01-13 01:18:06.932331 | orchestrator | + set -e 2026-01-13 01:18:06.932421 | orchestrator | + source /opt/manager-vars.sh 2026-01-13 01:18:06.932431 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-13 01:18:06.932439 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-13 01:18:06.932446 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-13 01:18:06.932453 | orchestrator | ++ CEPH_VERSION=reef 2026-01-13 01:18:06.932460 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-13 01:18:06.932468 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-13 01:18:06.932475 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-13 01:18:06.932955 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-13 01:18:06.933000 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-13 01:18:06.933006 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-13 01:18:06.933011 | orchestrator | ++ export ARA=false 2026-01-13 01:18:06.933015 | orchestrator | ++ ARA=false 2026-01-13 01:18:06.933019 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-13 01:18:06.933023 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-13 01:18:06.933043 | orchestrator | ++ export TEMPEST=true 2026-01-13 01:18:06.933048 | orchestrator | ++ TEMPEST=true 2026-01-13 01:18:06.933052 | orchestrator | ++ export IS_ZUUL=true 2026-01-13 01:18:06.933057 | orchestrator | ++ IS_ZUUL=true 2026-01-13 01:18:06.933061 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.234 2026-01-13 
01:18:06.933065 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.234 2026-01-13 01:18:06.933069 | orchestrator | ++ export EXTERNAL_API=false 2026-01-13 01:18:06.933073 | orchestrator | ++ EXTERNAL_API=false 2026-01-13 01:18:06.933078 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-13 01:18:06.933084 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-13 01:18:06.933089 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-13 01:18:06.933121 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-13 01:18:06.933132 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-13 01:18:06.933138 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-13 01:18:06.933145 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-01-13 01:18:06.933151 | orchestrator | + source /etc/os-release 2026-01-13 01:18:06.933157 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2026-01-13 01:18:06.933163 | orchestrator | ++ NAME=Ubuntu 2026-01-13 01:18:06.933169 | orchestrator | ++ VERSION_ID=24.04 2026-01-13 01:18:06.933175 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2026-01-13 01:18:06.933181 | orchestrator | ++ VERSION_CODENAME=noble 2026-01-13 01:18:06.933188 | orchestrator | ++ ID=ubuntu 2026-01-13 01:18:06.933194 | orchestrator | ++ ID_LIKE=debian 2026-01-13 01:18:06.933201 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-01-13 01:18:06.933205 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-01-13 01:18:06.933209 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-01-13 01:18:06.933225 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-01-13 01:18:06.933230 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-01-13 01:18:06.933234 | orchestrator | ++ LOGO=ubuntu-logo 2026-01-13 01:18:06.933238 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-01-13 01:18:06.933243 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic 
mysql-client' 2026-01-13 01:18:06.933248 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-13 01:18:06.955869 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-13 01:18:29.651396 | orchestrator | 2026-01-13 01:18:29.651497 | orchestrator | # Status of Elasticsearch 2026-01-13 01:18:29.651506 | orchestrator | 2026-01-13 01:18:29.651511 | orchestrator | + pushd /opt/configuration/contrib 2026-01-13 01:18:29.651516 | orchestrator | + echo 2026-01-13 01:18:29.651520 | orchestrator | + echo '# Status of Elasticsearch' 2026-01-13 01:18:29.651524 | orchestrator | + echo 2026-01-13 01:18:29.651529 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-01-13 01:18:29.795971 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-01-13 01:18:29.796271 | orchestrator | 2026-01-13 01:18:29.796282 | orchestrator | + echo 2026-01-13 01:18:29.796287 | orchestrator | + echo '# Status of MariaDB' 2026-01-13 01:18:29.796292 | orchestrator | # Status of MariaDB 2026-01-13 01:18:29.796484 | orchestrator | 2026-01-13 01:18:29.796496 | orchestrator | + echo 2026-01-13 01:18:29.796739 | orchestrator | ++ semver latest 10.0.0-0 2026-01-13 01:18:29.829279 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-13 01:18:29.829362 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-13 01:18:29.829372 | orchestrator | + osism status database 2026-01-13 01:18:31.813341 | orchestrator | 2026-01-13 01:18:31 | ERROR  | Unable to get ansible vault password 2026-01-13 01:18:31.813516 | 
orchestrator | 2026-01-13 01:18:31 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-01-13 01:18:31.813530 | orchestrator | 2026-01-13 01:18:31 | ERROR  | Dropping encrypted entries 2026-01-13 01:18:31.845617 | orchestrator | 2026-01-13 01:18:31 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-01-13 01:18:31.856933 | orchestrator | 2026-01-13 01:18:31 | INFO  | Cluster Status: Primary 2026-01-13 01:18:31.857073 | orchestrator | 2026-01-13 01:18:31 | INFO  | Connected: ON 2026-01-13 01:18:31.858729 | orchestrator | 2026-01-13 01:18:31 | INFO  | Ready: ON 2026-01-13 01:18:31.858790 | orchestrator | 2026-01-13 01:18:31 | INFO  | Cluster Size: 3 2026-01-13 01:18:31.858799 | orchestrator | 2026-01-13 01:18:31 | INFO  | Local State: Synced 2026-01-13 01:18:31.858807 | orchestrator | 2026-01-13 01:18:31 | INFO  | Cluster State UUID: 4c7e25da-f01a-11f0-8988-c257ed0aca95 2026-01-13 01:18:31.858845 | orchestrator | 2026-01-13 01:18:31 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-01-13 01:18:31.858855 | orchestrator | 2026-01-13 01:18:31 | INFO  | Galera Version: 26.4.24(ra6b53429) 2026-01-13 01:18:31.858862 | orchestrator | 2026-01-13 01:18:31 | INFO  | Local Node UUID: 81b21e74-f01a-11f0-9b30-e2d7a1171da6 2026-01-13 01:18:31.858869 | orchestrator | 2026-01-13 01:18:31 | INFO  | Flow Control Paused: 0.00% 2026-01-13 01:18:31.858876 | orchestrator | 2026-01-13 01:18:31 | INFO  | Recv Queue Avg: 0 2026-01-13 01:18:31.858882 | orchestrator | 2026-01-13 01:18:31 | INFO  | Send Queue Avg: 0.000127307 2026-01-13 01:18:31.858889 | orchestrator | 2026-01-13 01:18:31 | INFO  | Transactions: 5073 local commits, 7788 replicated, 122 received 2026-01-13 01:18:31.858895 | orchestrator | 2026-01-13 01:18:31 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-01-13 01:18:31.858901 | orchestrator | 2026-01-13 01:18:31 | INFO  | MariaDB Uptime: 23 minutes, 
23 seconds 2026-01-13 01:18:31.858908 | orchestrator | 2026-01-13 01:18:31 | INFO  | Threads: 128 connected, 1 running 2026-01-13 01:18:31.858915 | orchestrator | 2026-01-13 01:18:31 | INFO  | Queries: 137868 total, 0 slow 2026-01-13 01:18:31.858922 | orchestrator | 2026-01-13 01:18:31 | INFO  | Aborted Connects: 42 2026-01-13 01:18:31.858929 | orchestrator | 2026-01-13 01:18:31 | INFO  | MariaDB Galera Cluster validation PASSED 2026-01-13 01:18:32.164533 | orchestrator | 2026-01-13 01:18:32.164597 | orchestrator | # Status of Prometheus 2026-01-13 01:18:32.164603 | orchestrator | 2026-01-13 01:18:32.164608 | orchestrator | + echo 2026-01-13 01:18:32.164612 | orchestrator | + echo '# Status of Prometheus' 2026-01-13 01:18:32.164616 | orchestrator | + echo 2026-01-13 01:18:32.164621 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-01-13 01:18:32.227444 | orchestrator | Unauthorized 2026-01-13 01:18:32.231437 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-01-13 01:18:32.294277 | orchestrator | Unauthorized 2026-01-13 01:18:32.296890 | orchestrator | 2026-01-13 01:18:32.296966 | orchestrator | # Status of RabbitMQ 2026-01-13 01:18:32.296972 | orchestrator | 2026-01-13 01:18:32.296977 | orchestrator | + echo 2026-01-13 01:18:32.296982 | orchestrator | + echo '# Status of RabbitMQ' 2026-01-13 01:18:32.296986 | orchestrator | + echo 2026-01-13 01:18:32.297988 | orchestrator | ++ semver latest 10.0.0-0 2026-01-13 01:18:32.353561 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-13 01:18:32.353646 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-13 01:18:32.353655 | orchestrator | + osism status messaging 2026-01-13 01:18:52.912069 | orchestrator | 2026-01-13 01:18:52 | ERROR  | Unable to get ansible vault password 2026-01-13 01:18:52.912119 | orchestrator | 2026-01-13 01:18:52 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-01-13 
01:18:52.912126 | orchestrator | 2026-01-13 01:18:52 | ERROR  | Dropping encrypted entries 2026-01-13 01:18:52.952447 | orchestrator | 2026-01-13 01:18:52 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-01-13 01:18:53.007133 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7 2026-01-13 01:18:53.007181 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15 2026-01-13 01:18:53.007209 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-01-13 01:18:53.007214 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] Cluster Size: 3 2026-01-13 01:18:53.007218 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-13 01:18:53.007223 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-13 01:18:53.007239 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-01-13 01:18:53.007244 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] Connections: 204, Channels: 203, Queues: 173 2026-01-13 01:18:53.007248 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] Messages: 218 total, 218 ready, 0 unacked 2026-01-13 01:18:53.007252 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] Message Rates: 7.0/s publish, 7.6/s deliver 2026-01-13 01:18:53.007256 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] Disk Free: 58.5 GB (limit: 0.0 GB) 2026-01-13 01:18:53.007260 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-01-13 01:18:53.007832 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] File Descriptors: 119/1024 2026-01-13 01:18:53.007984 | 
orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-0] Sockets: 71/832 2026-01-13 01:18:53.007998 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-01-13 01:18:53.056905 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7 2026-01-13 01:18:53.057038 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15 2026-01-13 01:18:53.057447 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-01-13 01:18:53.057874 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] Cluster Size: 3 2026-01-13 01:18:53.058317 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-13 01:18:53.058633 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-13 01:18:53.059007 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-01-13 01:18:53.059330 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] Connections: 204, Channels: 203, Queues: 173 2026-01-13 01:18:53.059631 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] Messages: 218 total, 218 ready, 0 unacked 2026-01-13 01:18:53.060078 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] Message Rates: 7.0/s publish, 7.6/s deliver 2026-01-13 01:18:53.060405 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] Disk Free: 58.8 GB (limit: 0.0 GB) 2026-01-13 01:18:53.060818 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-01-13 01:18:53.061236 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-1] File Descriptors: 124/1024 2026-01-13 01:18:53.062158 | orchestrator | 
2026-01-13 01:18:53 | INFO  | [testbed-node-1] Sockets: 78/832 2026-01-13 01:18:53.062178 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-01-13 01:18:53.111640 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7 2026-01-13 01:18:53.111695 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15 2026-01-13 01:18:53.111706 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-01-13 01:18:53.111715 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] Cluster Size: 3 2026-01-13 01:18:53.111734 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-13 01:18:53.111754 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-13 01:18:53.111919 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-01-13 01:18:53.112589 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] Connections: 204, Channels: 203, Queues: 173 2026-01-13 01:18:53.112629 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] Messages: 218 total, 218 ready, 0 unacked 2026-01-13 01:18:53.114341 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] Message Rates: 7.0/s publish, 7.6/s deliver 2026-01-13 01:18:53.114385 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] Disk Free: 58.9 GB (limit: 0.0 GB) 2026-01-13 01:18:53.114393 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-01-13 01:18:53.114399 | orchestrator | 2026-01-13 01:18:53 | INFO  | [testbed-node-2] File Descriptors: 101/1024 2026-01-13 01:18:53.114406 | orchestrator | 2026-01-13 01:18:53 | 
INFO  | [testbed-node-2] Sockets: 55/832 2026-01-13 01:18:53.114412 | orchestrator | 2026-01-13 01:18:53 | INFO  | RabbitMQ Cluster validation PASSED 2026-01-13 01:18:53.507167 | orchestrator | 2026-01-13 01:18:53.507212 | orchestrator | # Status of Redis 2026-01-13 01:18:53.507217 | orchestrator | 2026-01-13 01:18:53.507222 | orchestrator | + echo 2026-01-13 01:18:53.507226 | orchestrator | + echo '# Status of Redis' 2026-01-13 01:18:53.507230 | orchestrator | + echo 2026-01-13 01:18:53.507234 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-01-13 01:18:53.512844 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001242s;;;0.000000;10.000000 2026-01-13 01:18:53.512914 | orchestrator | + popd 2026-01-13 01:18:53.512934 | orchestrator | 2026-01-13 01:18:53.512942 | orchestrator | # Create backup of MariaDB database 2026-01-13 01:18:53.512950 | orchestrator | 2026-01-13 01:18:53.512956 | orchestrator | + echo 2026-01-13 01:18:53.512972 | orchestrator | + echo '# Create backup of MariaDB database' 2026-01-13 01:18:53.512979 | orchestrator | + echo 2026-01-13 01:18:53.512994 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-01-13 01:18:55.723568 | orchestrator | 2026-01-13 01:18:55 | INFO  | Task 062ddd45-8a86-4a8a-9d97-879af9c04429 (mariadb_backup) was prepared for execution. 2026-01-13 01:18:55.723620 | orchestrator | 2026-01-13 01:18:55 | INFO  | It takes a moment until task 062ddd45-8a86-4a8a-9d97-879af9c04429 (mariadb_backup) has been started and output is visible here. 
2026-01-13 01:19:47.850892 | orchestrator | 2026-01-13 01:19:47.851001 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-13 01:19:47.851015 | orchestrator | 2026-01-13 01:19:47.851022 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-13 01:19:47.851027 | orchestrator | Tuesday 13 January 2026 01:19:00 +0000 (0:00:00.168) 0:00:00.168 ******* 2026-01-13 01:19:47.851031 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:19:47.851036 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:19:47.851040 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:19:47.851044 | orchestrator | 2026-01-13 01:19:47.851048 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-13 01:19:47.851052 | orchestrator | Tuesday 13 January 2026 01:19:00 +0000 (0:00:00.321) 0:00:00.490 ******* 2026-01-13 01:19:47.851056 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-13 01:19:47.851060 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-13 01:19:47.851064 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-13 01:19:47.851067 | orchestrator | 2026-01-13 01:19:47.851071 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-13 01:19:47.851094 | orchestrator | 2026-01-13 01:19:47.851099 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-13 01:19:47.851103 | orchestrator | Tuesday 13 January 2026 01:19:00 +0000 (0:00:00.566) 0:00:01.057 ******* 2026-01-13 01:19:47.851110 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-13 01:19:47.851116 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-13 01:19:47.851121 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-13 01:19:47.851127 | orchestrator | 
2026-01-13 01:19:47.851133 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-13 01:19:47.851139 | orchestrator | Tuesday 13 January 2026 01:19:01 +0000 (0:00:00.413) 0:00:01.470 ******* 2026-01-13 01:19:47.851147 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-13 01:19:47.851154 | orchestrator | 2026-01-13 01:19:47.851160 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-01-13 01:19:47.851166 | orchestrator | Tuesday 13 January 2026 01:19:01 +0000 (0:00:00.541) 0:00:02.012 ******* 2026-01-13 01:19:47.851172 | orchestrator | ok: [testbed-node-0] 2026-01-13 01:19:47.851178 | orchestrator | ok: [testbed-node-1] 2026-01-13 01:19:47.851185 | orchestrator | ok: [testbed-node-2] 2026-01-13 01:19:47.851191 | orchestrator | 2026-01-13 01:19:47.851212 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-01-13 01:19:47.851227 | orchestrator | Tuesday 13 January 2026 01:19:05 +0000 (0:00:03.276) 0:00:05.289 ******* 2026-01-13 01:19:47.851234 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-13 01:19:47.851241 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-01-13 01:19:47.851265 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-13 01:19:47.851273 | orchestrator | mariadb_bootstrap_restart 2026-01-13 01:19:47.851280 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:19:47.851287 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:19:47.851294 | orchestrator | changed: [testbed-node-0] 2026-01-13 01:19:47.851300 | orchestrator | 2026-01-13 01:19:47.851307 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-13 01:19:47.851313 | orchestrator | 
skipping: no hosts matched 2026-01-13 01:19:47.851320 | orchestrator | 2026-01-13 01:19:47.851326 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-13 01:19:47.851333 | orchestrator | skipping: no hosts matched 2026-01-13 01:19:47.851339 | orchestrator | 2026-01-13 01:19:47.851345 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-13 01:19:47.851352 | orchestrator | skipping: no hosts matched 2026-01-13 01:19:47.851358 | orchestrator | 2026-01-13 01:19:47.851365 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-13 01:19:47.851372 | orchestrator | 2026-01-13 01:19:47.851378 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-13 01:19:47.851385 | orchestrator | Tuesday 13 January 2026 01:19:46 +0000 (0:00:41.685) 0:00:46.974 ******* 2026-01-13 01:19:47.851391 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:19:47.851398 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:19:47.851405 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:19:47.851411 | orchestrator | 2026-01-13 01:19:47.851418 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-13 01:19:47.851426 | orchestrator | Tuesday 13 January 2026 01:19:47 +0000 (0:00:00.299) 0:00:47.274 ******* 2026-01-13 01:19:47.851434 | orchestrator | skipping: [testbed-node-0] 2026-01-13 01:19:47.851442 | orchestrator | skipping: [testbed-node-1] 2026-01-13 01:19:47.851449 | orchestrator | skipping: [testbed-node-2] 2026-01-13 01:19:47.851457 | orchestrator | 2026-01-13 01:19:47.851464 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:19:47.851473 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-13 
01:19:47.851489 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-13 01:19:47.851498 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-13 01:19:47.851506 | orchestrator | 2026-01-13 01:19:47.851513 | orchestrator | 2026-01-13 01:19:47.851521 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:19:47.851529 | orchestrator | Tuesday 13 January 2026 01:19:47 +0000 (0:00:00.393) 0:00:47.668 ******* 2026-01-13 01:19:47.851537 | orchestrator | =============================================================================== 2026-01-13 01:19:47.851545 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 41.69s 2026-01-13 01:19:47.851569 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.28s 2026-01-13 01:19:47.851577 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2026-01-13 01:19:47.851584 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s 2026-01-13 01:19:47.851593 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s 2026-01-13 01:19:47.851600 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.39s 2026-01-13 01:19:47.851605 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-01-13 01:19:47.851612 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2026-01-13 01:19:48.251853 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-01-13 01:19:48.260896 | orchestrator | + set -e 2026-01-13 01:19:48.260978 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-13 01:19:48.260989 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-13 01:19:48.260996 | orchestrator | ++ INTERACTIVE=false 2026-01-13 01:19:48.261003 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-13 01:19:48.261009 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-13 01:19:48.261023 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-13 01:19:48.262657 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-13 01:19:48.267754 | orchestrator | 2026-01-13 01:19:48.267829 | orchestrator | # OpenStack endpoints 2026-01-13 01:19:48.267839 | orchestrator | 2026-01-13 01:19:48.267846 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-13 01:19:48.267853 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-13 01:19:48.267859 | orchestrator | + export OS_CLOUD=admin 2026-01-13 01:19:48.267865 | orchestrator | + OS_CLOUD=admin 2026-01-13 01:19:48.267871 | orchestrator | + echo 2026-01-13 01:19:48.267933 | orchestrator | + echo '# OpenStack endpoints' 2026-01-13 01:19:48.267939 | orchestrator | + echo 2026-01-13 01:19:48.267945 | orchestrator | + openstack endpoint list 2026-01-13 01:19:51.557370 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-13 01:19:51.557418 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-01-13 01:19:51.557424 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-13 01:19:51.557428 | orchestrator | | 06574ad6874b4068b9bc0715f33ce4d8 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-01-13 01:19:51.557433 | orchestrator | | 19208f7c81644839bb2114dd1b2aa712 | RegionOne | neutron | network | True | 
internal | https://api-int.testbed.osism.xyz:9696 | 2026-01-13 01:19:51.557446 | orchestrator | | 256c014a163d4bb3b0a21d6d415e3f1f | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-13 01:19:51.557460 | orchestrator | | 27ffb9623e47436c9a7b873bfe49df78 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-01-13 01:19:51.557464 | orchestrator | | 2938964b65df47a2b60afc351c312186 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-01-13 01:19:51.557469 | orchestrator | | 3556e304fef940c7ac51bba018c10a60 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-13 01:19:51.557474 | orchestrator | | 462fe6fc292349acba4e5368c004e13f | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-01-13 01:19:51.557478 | orchestrator | | 582285c0b7354a218609896e63a7c1ca | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-01-13 01:19:51.557483 | orchestrator | | 61be64e294ac46e487d30d3c3dbc5a80 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-01-13 01:19:51.557487 | orchestrator | | 620a43c623624295a911aabcdc22cff3 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-13 01:19:51.557491 | orchestrator | | 87a2354dffd945359975254d4d61b7a0 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-01-13 01:19:51.557496 | orchestrator | | 896f8523f4b14bf59f1dae3a360b1340 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-01-13 01:19:51.557500 | orchestrator | | 9d0b4c8fe69f4495be961f98e0af2b7a | RegionOne | nova | compute | True | internal | 
https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-01-13 01:19:51.557505 | orchestrator | | b437ac5f277d43388ed5f54aca4d5319 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-01-13 01:19:51.557509 | orchestrator | | b4af2bdf24e9452293a735ab95ecb545 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-13 01:19:51.557513 | orchestrator | | bb9a954b184546e4a6cbfc27a68b076a | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-01-13 01:19:51.557518 | orchestrator | | c5df2317bce0469fb96e3c128fa439a3 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-01-13 01:19:51.557522 | orchestrator | | c7487228917044509d34aef0ae5e2e99 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-01-13 01:19:51.557527 | orchestrator | | d5e461332ec449ed9e1a8039c62648d1 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-01-13 01:19:51.557531 | orchestrator | | dab0159c02e54dd38966f555de81adab | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-01-13 01:19:51.557543 | orchestrator | | df150eb0270d4d5c97a7ce8e4aac8bb7 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-01-13 01:19:51.557547 | orchestrator | | f1d61a022aaf45de8479808af7cbc2c0 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-01-13 01:19:51.557552 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-13 01:19:51.793864 | orchestrator | 2026-01-13 01:19:51.793993 | orchestrator | # Cinder 2026-01-13 01:19:51.794006 | orchestrator | 2026-01-13 01:19:51.794052 | 
orchestrator | + echo 2026-01-13 01:19:51.794061 | orchestrator | + echo '# Cinder' 2026-01-13 01:19:51.794068 | orchestrator | + echo 2026-01-13 01:19:51.794075 | orchestrator | + openstack volume service list 2026-01-13 01:19:54.362306 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-13 01:19:54.362369 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-01-13 01:19:54.362378 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-13 01:19:54.362384 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-13T01:19:49.000000 | 2026-01-13 01:19:54.362393 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-13T01:19:49.000000 | 2026-01-13 01:19:54.362399 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-13T01:19:49.000000 | 2026-01-13 01:19:54.362405 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-01-13T01:19:49.000000 | 2026-01-13 01:19:54.362411 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-01-13T01:19:51.000000 | 2026-01-13 01:19:54.362417 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-01-13T01:19:52.000000 | 2026-01-13 01:19:54.362442 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-01-13T01:19:44.000000 | 2026-01-13 01:19:54.362449 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-01-13T01:19:45.000000 | 2026-01-13 01:19:54.362454 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-01-13T01:19:46.000000 | 2026-01-13 01:19:54.362460 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 
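The 300-openstack.sh check above only prints the service tables; turning a listing like the Cinder one into a pass/fail gate can be sketched as follows. This is a sketch, not part of the testbed scripts: `count_not_up` is a hypothetical helper that parses the CLI's table output with awk (in practice `openstack ... -f json` with a JSON parser would be more robust).

```shell
#!/usr/bin/env sh
# Gate on service health: given `openstack volume service list` table output,
# count rows whose State column (6th pipe-separated field) is not "up".
# Sketch only; the real check script just prints the tables.

count_not_up() {
    # rows look like: | cinder-volume | host | zone | enabled | up | updated |
    awk -F'|' '/cinder/ { gsub(/ /, "", $6); if ($6 != "up") n++ } END { print n + 0 }'
}

# Sample rows mimicking the table printed in the job log above.
sample='| cinder-scheduler | testbed-node-0 | internal | enabled | up   | 2026-01-13T01:19:49.000000 |
| cinder-volume    | testbed-node-1@rbd-volumes | nova | enabled | down | 2026-01-13T01:19:52.000000 |'

printf '%s\n' "$sample" | count_not_up  # prints 1 (one service not "up")
```

In the job this would be piped from the real command, e.g. `openstack volume service list | count_not_up`, and a non-zero count would fail the check under `set -e`.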
2026-01-13 01:19:54.591120 | orchestrator | 2026-01-13 01:19:54.591198 | orchestrator | # Neutron 2026-01-13 01:19:54.591209 | orchestrator | 2026-01-13 01:19:54.591217 | orchestrator | + echo 2026-01-13 01:19:54.591224 | orchestrator | + echo '# Neutron' 2026-01-13 01:19:54.591231 | orchestrator | + echo 2026-01-13 01:19:54.591238 | orchestrator | + openstack network agent list 2026-01-13 01:19:57.198266 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-13 01:19:57.198328 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-01-13 01:19:57.198338 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-13 01:19:57.198343 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-01-13 01:19:57.198348 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-01-13 01:19:57.198352 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-01-13 01:19:57.198356 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-01-13 01:19:57.198360 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-01-13 01:19:57.198364 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-01-13 01:19:57.198367 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-13 01:19:57.198383 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent 
| testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-13 01:19:57.198387 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-13 01:19:57.198391 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-13 01:19:57.428321 | orchestrator | + openstack network service provider list 2026-01-13 01:19:59.984150 | orchestrator | +---------------+------+---------+ 2026-01-13 01:19:59.984243 | orchestrator | | Service Type | Name | Default | 2026-01-13 01:19:59.984252 | orchestrator | +---------------+------+---------+ 2026-01-13 01:19:59.984260 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-01-13 01:19:59.984267 | orchestrator | +---------------+------+---------+ 2026-01-13 01:20:00.246319 | orchestrator | 2026-01-13 01:20:00.246383 | orchestrator | # Nova 2026-01-13 01:20:00.246388 | orchestrator | 2026-01-13 01:20:00.246392 | orchestrator | + echo 2026-01-13 01:20:00.246396 | orchestrator | + echo '# Nova' 2026-01-13 01:20:00.246401 | orchestrator | + echo 2026-01-13 01:20:00.246405 | orchestrator | + openstack compute service list 2026-01-13 01:20:03.028824 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-13 01:20:03.029014 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-01-13 01:20:03.029028 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-13 01:20:03.029036 | orchestrator | | ad6e1ef3-4b1f-47bb-a5e8-f20d1f9715e8 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-13T01:19:55.000000 | 2026-01-13 01:20:03.029043 | orchestrator | | 02ecf3a3-9236-4c9e-a41f-4511b5e69fae | 
nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-13T01:19:56.000000 | 2026-01-13 01:20:03.029065 | orchestrator | | e20dd9f4-0b2d-4012-8452-dcbf772284e2 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-13T01:19:57.000000 | 2026-01-13 01:20:03.029081 | orchestrator | | 6571e646-fa52-46f1-a7ee-4c48984f15e4 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-01-13T01:19:59.000000 | 2026-01-13 01:20:03.029094 | orchestrator | | 5aeeec1b-e06e-4ab7-bcd5-3e7efb785f72 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-01-13T01:20:00.000000 | 2026-01-13 01:20:03.029100 | orchestrator | | 3f0b0182-a9a1-42f3-b8ed-0f2b28e43402 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-01-13T01:20:00.000000 | 2026-01-13 01:20:03.029106 | orchestrator | | 29e50209-78a5-4012-a38a-bca2851525b0 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-01-13T01:20:00.000000 | 2026-01-13 01:20:03.029112 | orchestrator | | bb892313-8417-4ff5-8c2f-f9bf9aa0b005 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-01-13T01:20:00.000000 | 2026-01-13 01:20:03.029118 | orchestrator | | 3d63b899-2787-417e-96ce-91b3e6f96fad | nova-compute | testbed-node-4 | nova | enabled | up | 2026-01-13T01:20:01.000000 | 2026-01-13 01:20:03.029124 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-13 01:20:03.279571 | orchestrator | + openstack hypervisor list 2026-01-13 01:20:05.864199 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-01-13 01:20:05.864259 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-01-13 01:20:05.864265 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-01-13 01:20:05.864269 | orchestrator | | 
8f93afd9-4cc1-49f6-82ec-3a6527093679 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-01-13 01:20:05.864272 | orchestrator | | cbb720b7-82d5-452e-bf67-987b544a2ead | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-01-13 01:20:05.864297 | orchestrator | | 1269b2e9-7f60-43bd-a2a9-9cd7fb028269 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-01-13 01:20:05.864304 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-01-13 01:20:06.172541 | orchestrator | + echo 2026-01-13 01:20:06.173231 | orchestrator | 2026-01-13 01:20:06.173262 | orchestrator | # Run OpenStack test play 2026-01-13 01:20:06.173268 | orchestrator | 2026-01-13 01:20:06.173272 | orchestrator | + echo '# Run OpenStack test play' 2026-01-13 01:20:06.173276 | orchestrator | + echo 2026-01-13 01:20:06.173280 | orchestrator | + osism apply --environment openstack test 2026-01-13 01:20:08.199758 | orchestrator | 2026-01-13 01:20:08 | INFO  | Trying to run play test in environment openstack 2026-01-13 01:20:18.355662 | orchestrator | 2026-01-13 01:20:18 | INFO  | Task 0889392d-28b6-4335-8b89-a3e9cb5f251a (test) was prepared for execution. 2026-01-13 01:20:18.355716 | orchestrator | 2026-01-13 01:20:18 | INFO  | It takes a moment until task 0889392d-28b6-4335-8b89-a3e9cb5f251a (test) has been started and output is visible here. 
2026-01-13 01:27:17.384933 | orchestrator | 2026-01-13 01:27:17.385032 | orchestrator | PLAY [Create test project] ***************************************************** 2026-01-13 01:27:17.385046 | orchestrator | 2026-01-13 01:27:17.385051 | orchestrator | TASK [Create test domain] ****************************************************** 2026-01-13 01:27:17.385056 | orchestrator | Tuesday 13 January 2026 01:20:22 +0000 (0:00:00.051) 0:00:00.051 ******* 2026-01-13 01:27:17.385061 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385065 | orchestrator | 2026-01-13 01:27:17.385069 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-01-13 01:27:17.385074 | orchestrator | Tuesday 13 January 2026 01:20:24 +0000 (0:00:02.834) 0:00:02.885 ******* 2026-01-13 01:27:17.385078 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385082 | orchestrator | 2026-01-13 01:27:17.385086 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-01-13 01:27:17.385090 | orchestrator | Tuesday 13 January 2026 01:20:28 +0000 (0:00:03.758) 0:00:06.644 ******* 2026-01-13 01:27:17.385093 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385097 | orchestrator | 2026-01-13 01:27:17.385101 | orchestrator | TASK [Create test project] ***************************************************** 2026-01-13 01:27:17.385105 | orchestrator | Tuesday 13 January 2026 01:20:35 +0000 (0:00:06.479) 0:00:13.123 ******* 2026-01-13 01:27:17.385109 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385113 | orchestrator | 2026-01-13 01:27:17.385116 | orchestrator | TASK [Create test user] ******************************************************** 2026-01-13 01:27:17.385123 | orchestrator | Tuesday 13 January 2026 01:20:39 +0000 (0:00:03.976) 0:00:17.100 ******* 2026-01-13 01:27:17.385129 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385136 | orchestrator | 2026-01-13 01:27:17.385144 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-01-13 01:27:17.385151 | orchestrator | Tuesday 13 January 2026 01:20:43 +0000 (0:00:03.934) 0:00:21.034 ******* 2026-01-13 01:27:17.385156 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-01-13 01:27:17.385163 | orchestrator | changed: [localhost] => (item=member) 2026-01-13 01:27:17.385170 | orchestrator | changed: [localhost] => (item=creator) 2026-01-13 01:27:17.385176 | orchestrator | 2026-01-13 01:27:17.385182 | orchestrator | TASK [Create test server group] ************************************************ 2026-01-13 01:27:17.385188 | orchestrator | Tuesday 13 January 2026 01:20:54 +0000 (0:00:11.122) 0:00:32.156 ******* 2026-01-13 01:27:17.385195 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385200 | orchestrator | 2026-01-13 01:27:17.385206 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-01-13 01:27:17.385212 | orchestrator | Tuesday 13 January 2026 01:20:58 +0000 (0:00:04.112) 0:00:36.269 ******* 2026-01-13 01:27:17.385218 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385262 | orchestrator | 2026-01-13 01:27:17.385278 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-01-13 01:27:17.385285 | orchestrator | Tuesday 13 January 2026 01:21:03 +0000 (0:00:04.889) 0:00:41.159 ******* 2026-01-13 01:27:17.385313 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385319 | orchestrator | 2026-01-13 01:27:17.385325 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-01-13 01:27:17.385330 | orchestrator | Tuesday 13 January 2026 01:21:07 +0000 (0:00:04.071) 0:00:45.231 ******* 2026-01-13 01:27:17.385336 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385341 | orchestrator | 2026-01-13 01:27:17.385348 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-01-13 01:27:17.385354 | orchestrator | Tuesday 13 January 2026 01:21:11 +0000 (0:00:03.969) 0:00:49.200 ******* 2026-01-13 01:27:17.385361 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385367 | orchestrator | 2026-01-13 01:27:17.385373 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-01-13 01:27:17.385379 | orchestrator | Tuesday 13 January 2026 01:21:15 +0000 (0:00:04.176) 0:00:53.377 ******* 2026-01-13 01:27:17.385384 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385390 | orchestrator | 2026-01-13 01:27:17.385396 | orchestrator | TASK [Create test network] ***************************************************** 2026-01-13 01:27:17.385402 | orchestrator | Tuesday 13 January 2026 01:21:19 +0000 (0:00:03.732) 0:00:57.110 ******* 2026-01-13 01:27:17.385408 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385414 | orchestrator | 2026-01-13 01:27:17.385420 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-01-13 01:27:17.385429 | orchestrator | Tuesday 13 January 2026 01:21:23 +0000 (0:00:04.466) 0:01:01.577 ******* 2026-01-13 01:27:17.385437 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385442 | orchestrator | 2026-01-13 01:27:17.385448 | orchestrator | TASK [Create test router] ****************************************************** 2026-01-13 01:27:17.385454 | orchestrator | Tuesday 13 January 2026 01:21:29 +0000 (0:00:05.592) 0:01:07.169 ******* 2026-01-13 01:27:17.385459 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385464 | orchestrator | 2026-01-13 01:27:17.385471 | orchestrator | TASK [Create test instances] *************************************************** 2026-01-13 01:27:17.385477 | orchestrator | Tuesday 13 January 2026 01:21:41 +0000 (0:00:11.733) 0:01:18.902 ******* 2026-01-13 01:27:17.385483 | orchestrator | changed: [localhost] => 
(item=test) 2026-01-13 01:27:17.385490 | orchestrator | changed: [localhost] => (item=test-1) 2026-01-13 01:27:17.385495 | orchestrator | 2026-01-13 01:27:17.385501 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-13 01:27:17.385506 | orchestrator | 2026-01-13 01:27:17.385513 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-13 01:27:17.385518 | orchestrator | changed: [localhost] => (item=test-2) 2026-01-13 01:27:17.385524 | orchestrator | 2026-01-13 01:27:17.385529 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-13 01:27:17.385536 | orchestrator | changed: [localhost] => (item=test-3) 2026-01-13 01:27:17.385542 | orchestrator | 2026-01-13 01:27:17.385549 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-13 01:27:17.385555 | orchestrator | 2026-01-13 01:27:17.385562 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-13 01:27:17.385568 | orchestrator | changed: [localhost] => (item=test-4) 2026-01-13 01:27:17.385575 | orchestrator | 2026-01-13 01:27:17.385581 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-01-13 01:27:17.385607 | orchestrator | Tuesday 13 January 2026 01:25:55 +0000 (0:04:14.951) 0:05:33.854 ******* 2026-01-13 01:27:17.385614 | orchestrator | changed: [localhost] => (item=test) 2026-01-13 01:27:17.385620 | orchestrator | changed: [localhost] => (item=test-1) 2026-01-13 01:27:17.385626 | orchestrator | changed: [localhost] => (item=test-2) 2026-01-13 01:27:17.385633 | orchestrator | changed: [localhost] => (item=test-3) 2026-01-13 01:27:17.385640 | orchestrator | changed: [localhost] => (item=test-4) 2026-01-13 01:27:17.385647 | orchestrator | 2026-01-13 01:27:17.385654 | orchestrator | TASK [Add tag to instances] 
**************************************************** 2026-01-13 01:27:17.385672 | orchestrator | Tuesday 13 January 2026 01:26:18 +0000 (0:00:22.113) 0:05:55.968 ******* 2026-01-13 01:27:17.385680 | orchestrator | changed: [localhost] => (item=test) 2026-01-13 01:27:17.385687 | orchestrator | changed: [localhost] => (item=test-1) 2026-01-13 01:27:17.385692 | orchestrator | changed: [localhost] => (item=test-2) 2026-01-13 01:27:17.385699 | orchestrator | changed: [localhost] => (item=test-3) 2026-01-13 01:27:17.385704 | orchestrator | changed: [localhost] => (item=test-4) 2026-01-13 01:27:17.385711 | orchestrator | 2026-01-13 01:27:17.385717 | orchestrator | TASK [Create test volume] ****************************************************** 2026-01-13 01:27:17.385722 | orchestrator | Tuesday 13 January 2026 01:26:51 +0000 (0:00:33.491) 0:06:29.460 ******* 2026-01-13 01:27:17.385728 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385733 | orchestrator | 2026-01-13 01:27:17.385739 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-01-13 01:27:17.385745 | orchestrator | Tuesday 13 January 2026 01:26:58 +0000 (0:00:06.858) 0:06:36.318 ******* 2026-01-13 01:27:17.385751 | orchestrator | changed: [localhost] 2026-01-13 01:27:17.385757 | orchestrator | 2026-01-13 01:27:17.385763 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-01-13 01:27:17.385767 | orchestrator | Tuesday 13 January 2026 01:27:11 +0000 (0:00:13.010) 0:06:49.328 ******* 2026-01-13 01:27:17.385771 | orchestrator | ok: [localhost] 2026-01-13 01:27:17.385775 | orchestrator | 2026-01-13 01:27:17.385778 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-01-13 01:27:17.385782 | orchestrator | Tuesday 13 January 2026 01:27:17 +0000 (0:00:05.645) 0:06:54.974 ******* 2026-01-13 01:27:17.385786 | orchestrator | ok: [localhost] => { 2026-01-13 
01:27:17.385790 | orchestrator |  "msg": "192.168.112.133" 2026-01-13 01:27:17.385794 | orchestrator | } 2026-01-13 01:27:17.385798 | orchestrator | 2026-01-13 01:27:17.385801 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-13 01:27:17.385805 | orchestrator | localhost : ok=22  changed=20  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-13 01:27:17.385811 | orchestrator | 2026-01-13 01:27:17.385815 | orchestrator | 2026-01-13 01:27:17.385819 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-13 01:27:17.385829 | orchestrator | Tuesday 13 January 2026 01:27:17 +0000 (0:00:00.036) 0:06:55.010 ******* 2026-01-13 01:27:17.385833 | orchestrator | =============================================================================== 2026-01-13 01:27:17.385836 | orchestrator | Create test instances ------------------------------------------------- 254.95s 2026-01-13 01:27:17.385840 | orchestrator | Add tag to instances --------------------------------------------------- 33.49s 2026-01-13 01:27:17.385844 | orchestrator | Add metadata to instances ---------------------------------------------- 22.11s 2026-01-13 01:27:17.385848 | orchestrator | Attach test volume ----------------------------------------------------- 13.01s 2026-01-13 01:27:17.385851 | orchestrator | Create test router ----------------------------------------------------- 11.73s 2026-01-13 01:27:17.385855 | orchestrator | Add member roles to user test ------------------------------------------ 11.12s 2026-01-13 01:27:17.385859 | orchestrator | Create test volume ------------------------------------------------------ 6.86s 2026-01-13 01:27:17.385862 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.48s 2026-01-13 01:27:17.385866 | orchestrator | Create floating ip address ---------------------------------------------- 5.65s 2026-01-13 01:27:17.385870 
| orchestrator | Create test subnet ------------------------------------------------------ 5.59s 2026-01-13 01:27:17.385873 | orchestrator | Create ssh security group ----------------------------------------------- 4.89s 2026-01-13 01:27:17.385877 | orchestrator | Create test network ----------------------------------------------------- 4.47s 2026-01-13 01:27:17.385881 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.18s 2026-01-13 01:27:17.385884 | orchestrator | Create test server group ------------------------------------------------ 4.11s 2026-01-13 01:27:17.385893 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.07s 2026-01-13 01:27:17.385897 | orchestrator | Create test project ----------------------------------------------------- 3.98s 2026-01-13 01:27:17.385901 | orchestrator | Create icmp security group ---------------------------------------------- 3.97s 2026-01-13 01:27:17.385905 | orchestrator | Create test user -------------------------------------------------------- 3.93s 2026-01-13 01:27:17.385908 | orchestrator | Create test-admin user -------------------------------------------------- 3.76s 2026-01-13 01:27:17.385912 | orchestrator | Create test keypair ----------------------------------------------------- 3.73s 2026-01-13 01:27:17.703552 | orchestrator | + server_list 2026-01-13 01:27:17.703687 | orchestrator | + openstack --os-cloud test server list 2026-01-13 01:27:20.982165 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-01-13 01:27:20.982248 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-01-13 01:27:20.982255 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-01-13 01:27:20.982259 | orchestrator | | 
1b15012c-1cbf-4117-a3aa-ee944b91d5d2 | test-4 | ACTIVE | test=192.168.112.182, 192.168.200.225 | N/A (booted from volume) | SCS-1L-1 | 2026-01-13 01:27:20.982263 | orchestrator | | 8ac30647-bb69-4d10-9174-753873ad881b | test-3 | ACTIVE | test=192.168.112.189, 192.168.200.82 | N/A (booted from volume) | SCS-1L-1 | 2026-01-13 01:27:20.982267 | orchestrator | | e7fd2462-0b34-4225-997f-2488fa25b38e | test-2 | ACTIVE | test=192.168.112.197, 192.168.200.207 | N/A (booted from volume) | SCS-1L-1 | 2026-01-13 01:27:20.982271 | orchestrator | | 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 | test-1 | ACTIVE | test=192.168.112.184, 192.168.200.155 | N/A (booted from volume) | SCS-1L-1 | 2026-01-13 01:27:20.982275 | orchestrator | | 105b5d47-9d05-45ab-8759-11299bd19793 | test | ACTIVE | test=192.168.112.133, 192.168.200.13 | N/A (booted from volume) | SCS-1L-1 | 2026-01-13 01:27:20.982279 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-01-13 01:27:21.267723 | orchestrator | + openstack --os-cloud test server show test 2026-01-13 01:27:24.304256 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:24.304323 | orchestrator | | Field | Value | 2026-01-13 01:27:24.304333 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:24.304341 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-13 01:27:24.304359 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-13 01:27:24.304366 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-13 01:27:24.304373 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-01-13 01:27:24.304380 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-13 01:27:24.304386 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-13 01:27:24.304403 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-13 01:27:24.304411 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-13 01:27:24.304418 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-13 01:27:24.304431 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-13 01:27:24.304438 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-13 01:27:24.304450 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-13 01:27:24.304457 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-13 01:27:24.304463 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-13 01:27:24.304470 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-13 01:27:24.304477 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-13T01:22:25.000000 | 2026-01-13 01:27:24.304487 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-13 01:27:24.304494 | orchestrator | | accessIPv4 | | 2026-01-13 01:27:24.304501 | orchestrator | | accessIPv6 | | 2026-01-13 01:27:24.304510 | orchestrator | | addresses 
| test=192.168.112.133, 192.168.200.13 | 2026-01-13 01:27:24.304521 | orchestrator | | config_drive | | 2026-01-13 01:27:24.304528 | orchestrator | | created | 2026-01-13T01:21:49Z | 2026-01-13 01:27:24.304535 | orchestrator | | description | None | 2026-01-13 01:27:24.304542 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-13 01:27:24.304549 | orchestrator | | hostId | b7c58971654cc5a37db85f5dfcac31fb8cab3c23a991b40125a50d5c | 2026-01-13 01:27:24.304555 | orchestrator | | host_status | None | 2026-01-13 01:27:24.304566 | orchestrator | | id | 105b5d47-9d05-45ab-8759-11299bd19793 | 2026-01-13 01:27:24.304573 | orchestrator | | image | N/A (booted from volume) | 2026-01-13 01:27:24.304580 | orchestrator | | key_name | test | 2026-01-13 01:27:24.304595 | orchestrator | | locked | False | 2026-01-13 01:27:24.304602 | orchestrator | | locked_reason | None | 2026-01-13 01:27:24.304609 | orchestrator | | name | test | 2026-01-13 01:27:24.304615 | orchestrator | | pinned_availability_zone | None | 2026-01-13 01:27:24.304622 | orchestrator | | progress | 0 | 2026-01-13 01:27:24.304628 | orchestrator | | project_id | 6947ea1887d64589bd716729c60f7645 | 2026-01-13 01:27:24.304635 | orchestrator | | properties | hostname='test' | 2026-01-13 01:27:24.304646 | orchestrator | | security_groups | name='ssh' | 2026-01-13 01:27:24.304652 | orchestrator | | | name='icmp' | 2026-01-13 01:27:24.304663 | orchestrator | | server_groups | None | 2026-01-13 01:27:24.304672 | orchestrator | | status | ACTIVE | 2026-01-13 01:27:24.304679 | orchestrator | | tags | test | 2026-01-13 01:27:24.304686 | orchestrator | | 
trusted_image_certificates | None | 2026-01-13 01:27:24.304693 | orchestrator | | updated | 2026-01-13T01:26:00Z | 2026-01-13 01:27:24.304700 | orchestrator | | user_id | cf41a0e0bd334268961c092230b5c237 | 2026-01-13 01:27:24.304707 | orchestrator | | volumes_attached | delete_on_termination='True', id='9a1dd45e-c084-46c9-9b33-74ce3d98308a' | 2026-01-13 01:27:24.304714 | orchestrator | | | delete_on_termination='False', id='6b4757db-f21e-47fa-a113-8d503701f961' | 2026-01-13 01:27:24.308755 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:24.567889 | orchestrator | + openstack --os-cloud test server show test-1 2026-01-13 01:27:27.644488 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:27.644572 | orchestrator | | Field | Value | 2026-01-13 01:27:27.644583 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2026-01-13 01:27:27.644591 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-13 01:27:27.644597 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-13 01:27:27.644604 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-13 01:27:27.644610 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-01-13 01:27:27.644616 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-13 01:27:27.644623 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-13 01:27:27.644643 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-13 01:27:27.644665 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-13 01:27:27.644671 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-13 01:27:27.644678 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-13 01:27:27.644686 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-13 01:27:27.644693 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-13 01:27:27.644705 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-13 01:27:27.644712 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-13 01:27:27.644719 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-13 01:27:27.644726 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-13T01:23:22.000000 | 2026-01-13 01:27:27.644746 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-13 01:27:27.644754 | orchestrator | | accessIPv4 | | 2026-01-13 01:27:27.644763 | orchestrator | | accessIPv6 | | 2026-01-13 01:27:27.644768 | orchestrator | | addresses | test=192.168.112.184, 192.168.200.155 | 2026-01-13 01:27:27.644772 | orchestrator | | config_drive | | 2026-01-13 01:27:27.644775 | orchestrator | | created | 2026-01-13T01:22:48Z | 2026-01-13 01:27:27.644779 | orchestrator | | description | None | 2026-01-13 01:27:27.644783 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', 
extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-13 01:27:27.644787 | orchestrator | | hostId | 299cc453120c89f48d302e3526bfcb16d08ef82d89c546a2b6cd9172 | 2026-01-13 01:27:27.644796 | orchestrator | | host_status | None | 2026-01-13 01:27:27.644804 | orchestrator | | id | 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 | 2026-01-13 01:27:27.644808 | orchestrator | | image | N/A (booted from volume) | 2026-01-13 01:27:27.644852 | orchestrator | | key_name | test | 2026-01-13 01:27:27.644873 | orchestrator | | locked | False | 2026-01-13 01:27:27.644878 | orchestrator | | locked_reason | None | 2026-01-13 01:27:27.644881 | orchestrator | | name | test-1 | 2026-01-13 01:27:27.644885 | orchestrator | | pinned_availability_zone | None | 2026-01-13 01:27:27.644889 | orchestrator | | progress | 0 | 2026-01-13 01:27:27.644897 | orchestrator | | project_id | 6947ea1887d64589bd716729c60f7645 | 2026-01-13 01:27:27.644901 | orchestrator | | properties | hostname='test-1' | 2026-01-13 01:27:27.644910 | orchestrator | | security_groups | name='ssh' | 2026-01-13 01:27:27.644914 | orchestrator | | | name='icmp' | 2026-01-13 01:27:27.644920 | orchestrator | | server_groups | None | 2026-01-13 01:27:27.644924 | orchestrator | | status | ACTIVE | 2026-01-13 01:27:27.644928 | orchestrator | | tags | test | 2026-01-13 01:27:27.644932 | orchestrator | | trusted_image_certificates | None | 2026-01-13 01:27:27.644936 | orchestrator | | updated | 2026-01-13T01:26:04Z | 2026-01-13 01:27:27.644940 | orchestrator | | user_id | cf41a0e0bd334268961c092230b5c237 | 2026-01-13 01:27:27.644947 | orchestrator | | volumes_attached | delete_on_termination='True', id='6f2b9777-0ed8-44e1-ab2e-af2a66da4310' | 2026-01-13 01:27:27.649341 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:27.940502 | orchestrator | + openstack --os-cloud test server show test-2 2026-01-13 01:27:31.018153 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:31.018302 | orchestrator | | Field | Value | 2026-01-13 01:27:31.018331 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:31.018339 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-13 01:27:31.018346 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-13 01:27:31.018354 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-13 01:27:31.018361 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-01-13 01:27:31.018387 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-13 01:27:31.018394 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-13 
01:27:31.018418 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-13 01:27:31.018425 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-13 01:27:31.018431 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-13 01:27:31.018441 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-13 01:27:31.018448 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-13 01:27:31.018454 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-13 01:27:31.018460 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-13 01:27:31.018472 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-13 01:27:31.018480 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-13 01:27:31.018488 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-13T01:24:18.000000 | 2026-01-13 01:27:31.018501 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-13 01:27:31.018507 | orchestrator | | accessIPv4 | | 2026-01-13 01:27:31.018513 | orchestrator | | accessIPv6 | | 2026-01-13 01:27:31.018520 | orchestrator | | addresses | test=192.168.112.197, 192.168.200.207 | 2026-01-13 01:27:31.018525 | orchestrator | | config_drive | | 2026-01-13 01:27:31.018531 | orchestrator | | created | 2026-01-13T01:23:43Z | 2026-01-13 01:27:31.018548 | orchestrator | | description | None | 2026-01-13 01:27:31.018555 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-13 01:27:31.018561 | orchestrator | | hostId | 3b5320a87cce66fed3f5927f5afa231c08cad5c9a8f54dc3b7cadf91 | 2026-01-13 01:27:31.018567 | orchestrator | | host_status | None | 2026-01-13 01:27:31.018577 | orchestrator | | id | 
e7fd2462-0b34-4225-997f-2488fa25b38e | 2026-01-13 01:27:31.018581 | orchestrator | | image | N/A (booted from volume) | 2026-01-13 01:27:31.018588 | orchestrator | | key_name | test | 2026-01-13 01:27:31.018592 | orchestrator | | locked | False | 2026-01-13 01:27:31.018595 | orchestrator | | locked_reason | None | 2026-01-13 01:27:31.018599 | orchestrator | | name | test-2 | 2026-01-13 01:27:31.018607 | orchestrator | | pinned_availability_zone | None | 2026-01-13 01:27:31.018611 | orchestrator | | progress | 0 | 2026-01-13 01:27:31.018615 | orchestrator | | project_id | 6947ea1887d64589bd716729c60f7645 | 2026-01-13 01:27:31.018618 | orchestrator | | properties | hostname='test-2' | 2026-01-13 01:27:31.018626 | orchestrator | | security_groups | name='ssh' | 2026-01-13 01:27:31.018631 | orchestrator | | | name='icmp' | 2026-01-13 01:27:31.018637 | orchestrator | | server_groups | None | 2026-01-13 01:27:31.018641 | orchestrator | | status | ACTIVE | 2026-01-13 01:27:31.018645 | orchestrator | | tags | test | 2026-01-13 01:27:31.018653 | orchestrator | | trusted_image_certificates | None | 2026-01-13 01:27:31.018657 | orchestrator | | updated | 2026-01-13T01:26:08Z | 2026-01-13 01:27:31.018660 | orchestrator | | user_id | cf41a0e0bd334268961c092230b5c237 | 2026-01-13 01:27:31.018664 | orchestrator | | volumes_attached | delete_on_termination='True', id='7da2bc0c-af16-4147-95c4-154cd1481cc8' | 2026-01-13 01:27:31.024170 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:31.335811 | orchestrator | + openstack --os-cloud test server show test-3 2026-01-13 01:27:34.253590 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:34.253638 | orchestrator | | Field | Value | 2026-01-13 01:27:34.253651 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:34.253655 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-13 01:27:34.253667 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-13 01:27:34.253670 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-13 01:27:34.253673 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-01-13 01:27:34.253677 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-13 01:27:34.253680 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-13 01:27:34.253691 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-13 01:27:34.253694 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-13 01:27:34.253698 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-13 01:27:34.253703 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-13 01:27:34.253709 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-13 01:27:34.253712 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-13 01:27:34.253715 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-01-13 01:27:34.253719 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-13 01:27:34.253722 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-13 01:27:34.253725 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-13T01:25:01.000000 | 2026-01-13 01:27:34.253731 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-13 01:27:34.253734 | orchestrator | | accessIPv4 | | 2026-01-13 01:27:34.253737 | orchestrator | | accessIPv6 | | 2026-01-13 01:27:34.253744 | orchestrator | | addresses | test=192.168.112.189, 192.168.200.82 | 2026-01-13 01:27:34.253748 | orchestrator | | config_drive | | 2026-01-13 01:27:34.253751 | orchestrator | | created | 2026-01-13T01:24:36Z | 2026-01-13 01:27:34.253754 | orchestrator | | description | None | 2026-01-13 01:27:34.253757 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-13 01:27:34.253761 | orchestrator | | hostId | 299cc453120c89f48d302e3526bfcb16d08ef82d89c546a2b6cd9172 | 2026-01-13 01:27:34.253764 | orchestrator | | host_status | None | 2026-01-13 01:27:34.253770 | orchestrator | | id | 8ac30647-bb69-4d10-9174-753873ad881b | 2026-01-13 01:27:34.253773 | orchestrator | | image | N/A (booted from volume) | 2026-01-13 01:27:34.253776 | orchestrator | | key_name | test | 2026-01-13 01:27:34.253783 | orchestrator | | locked | False | 2026-01-13 01:27:34.253786 | orchestrator | | locked_reason | None | 2026-01-13 01:27:34.253790 | orchestrator | | name | test-3 | 2026-01-13 01:27:34.253793 | orchestrator | | pinned_availability_zone | None | 2026-01-13 01:27:34.253796 | orchestrator | | progress | 0 | 2026-01-13 
01:27:34.253799 | orchestrator | | project_id | 6947ea1887d64589bd716729c60f7645 | 2026-01-13 01:27:34.253802 | orchestrator | | properties | hostname='test-3' | 2026-01-13 01:27:34.253808 | orchestrator | | security_groups | name='ssh' | 2026-01-13 01:27:34.253811 | orchestrator | | | name='icmp' | 2026-01-13 01:27:34.253816 | orchestrator | | server_groups | None | 2026-01-13 01:27:34.253820 | orchestrator | | status | ACTIVE | 2026-01-13 01:27:34.253965 | orchestrator | | tags | test | 2026-01-13 01:27:34.253969 | orchestrator | | trusted_image_certificates | None | 2026-01-13 01:27:34.253972 | orchestrator | | updated | 2026-01-13T01:26:13Z | 2026-01-13 01:27:34.253976 | orchestrator | | user_id | cf41a0e0bd334268961c092230b5c237 | 2026-01-13 01:27:34.253979 | orchestrator | | volumes_attached | delete_on_termination='True', id='3fa8b44a-0e91-46fa-a4f5-3b822a805fe8' | 2026-01-13 01:27:34.258717 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:34.520449 | orchestrator | + openstack --os-cloud test server show test-4 2026-01-13 01:27:37.537886 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:37.537941 | orchestrator | | Field | Value | 2026-01-13 01:27:37.537946 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:37.537949 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-13 01:27:37.537953 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-13 01:27:37.537956 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-13 01:27:37.537959 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-01-13 01:27:37.537962 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-13 01:27:37.537965 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-13 01:27:37.537975 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-13 01:27:37.537986 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-13 01:27:37.537989 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-13 01:27:37.537993 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-13 01:27:37.537996 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-13 01:27:37.537999 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-13 01:27:37.538002 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-13 01:27:37.538005 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-13 01:27:37.538009 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-13 01:27:37.538012 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-13T01:25:43.000000 | 2026-01-13 01:27:37.538049 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-13 01:27:37.538053 | orchestrator | | accessIPv4 | | 2026-01-13 01:27:37.538056 | orchestrator | | accessIPv6 | | 2026-01-13 01:27:37.538059 | orchestrator | | 
addresses | test=192.168.112.182, 192.168.200.225 | 2026-01-13 01:27:37.538062 | orchestrator | | config_drive | | 2026-01-13 01:27:37.538065 | orchestrator | | created | 2026-01-13T01:25:18Z | 2026-01-13 01:27:37.538069 | orchestrator | | description | None | 2026-01-13 01:27:37.538072 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-13 01:27:37.538075 | orchestrator | | hostId | b7c58971654cc5a37db85f5dfcac31fb8cab3c23a991b40125a50d5c | 2026-01-13 01:27:37.538078 | orchestrator | | host_status | None | 2026-01-13 01:27:37.538088 | orchestrator | | id | 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 | 2026-01-13 01:27:37.538092 | orchestrator | | image | N/A (booted from volume) | 2026-01-13 01:27:37.538095 | orchestrator | | key_name | test | 2026-01-13 01:27:37.538098 | orchestrator | | locked | False | 2026-01-13 01:27:37.538101 | orchestrator | | locked_reason | None | 2026-01-13 01:27:37.538105 | orchestrator | | name | test-4 | 2026-01-13 01:27:37.538108 | orchestrator | | pinned_availability_zone | None | 2026-01-13 01:27:37.538111 | orchestrator | | progress | 0 | 2026-01-13 01:27:37.538114 | orchestrator | | project_id | 6947ea1887d64589bd716729c60f7645 | 2026-01-13 01:27:37.538120 | orchestrator | | properties | hostname='test-4' | 2026-01-13 01:27:37.538127 | orchestrator | | security_groups | name='ssh' | 2026-01-13 01:27:37.538131 | orchestrator | | | name='icmp' | 2026-01-13 01:27:37.538134 | orchestrator | | server_groups | None | 2026-01-13 01:27:37.538137 | orchestrator | | status | ACTIVE | 2026-01-13 01:27:37.538140 | orchestrator | | tags | test | 2026-01-13 01:27:37.538143 | orchestrator | | 
trusted_image_certificates | None | 2026-01-13 01:27:37.538147 | orchestrator | | updated | 2026-01-13T01:26:17Z | 2026-01-13 01:27:37.538155 | orchestrator | | user_id | cf41a0e0bd334268961c092230b5c237 | 2026-01-13 01:27:37.538160 | orchestrator | | volumes_attached | delete_on_termination='True', id='484c345a-1c17-48f4-8619-af4ffc6b78e7' | 2026-01-13 01:27:37.543078 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-13 01:27:37.806414 | orchestrator | + server_ping 2026-01-13 01:27:37.807797 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-01-13 01:27:37.807837 | orchestrator | ++ tr -d '\r' 2026-01-13 01:27:40.665903 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:27:40.666068 | orchestrator | + ping -c3 192.168.112.197 2026-01-13 01:27:40.685816 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 
2026-01-13 01:27:40.685889 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=9.85 ms 2026-01-13 01:27:41.679808 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.53 ms 2026-01-13 01:27:42.681230 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.71 ms 2026-01-13 01:27:42.681313 | orchestrator | 2026-01-13 01:27:42.681320 | orchestrator | --- 192.168.112.197 ping statistics --- 2026-01-13 01:27:42.681326 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:27:42.681330 | orchestrator | rtt min/avg/max/mdev = 1.714/4.698/9.853/3.660 ms 2026-01-13 01:27:42.682471 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:27:42.682516 | orchestrator | + ping -c3 192.168.112.184 2026-01-13 01:27:42.694292 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data. 2026-01-13 01:27:42.694381 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=7.24 ms 2026-01-13 01:27:43.691530 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=3.01 ms 2026-01-13 01:27:44.691021 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=1.33 ms 2026-01-13 01:27:44.691069 | orchestrator | 2026-01-13 01:27:44.691075 | orchestrator | --- 192.168.112.184 ping statistics --- 2026-01-13 01:27:44.691080 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:27:44.691084 | orchestrator | rtt min/avg/max/mdev = 1.334/3.863/7.243/2.486 ms 2026-01-13 01:27:44.692426 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:27:44.692450 | orchestrator | + ping -c3 192.168.112.182 2026-01-13 01:27:44.702466 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 
2026-01-13 01:27:44.702510 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=5.66 ms 2026-01-13 01:27:45.700935 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=1.89 ms 2026-01-13 01:27:46.702224 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.30 ms 2026-01-13 01:27:46.702298 | orchestrator | 2026-01-13 01:27:46.702306 | orchestrator | --- 192.168.112.182 ping statistics --- 2026-01-13 01:27:46.702314 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:27:46.702320 | orchestrator | rtt min/avg/max/mdev = 1.298/2.947/5.658/1.931 ms 2026-01-13 01:27:46.702327 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:27:46.702333 | orchestrator | + ping -c3 192.168.112.189 2026-01-13 01:27:46.712174 | orchestrator | PING 192.168.112.189 (192.168.112.189) 56(84) bytes of data. 2026-01-13 01:27:46.712249 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=1 ttl=63 time=5.29 ms 2026-01-13 01:27:47.710984 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=2 ttl=63 time=1.75 ms 2026-01-13 01:27:48.713115 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=3 ttl=63 time=1.53 ms 2026-01-13 01:27:48.713176 | orchestrator | 2026-01-13 01:27:48.713232 | orchestrator | --- 192.168.112.189 ping statistics --- 2026-01-13 01:27:48.713242 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-01-13 01:27:48.713254 | orchestrator | rtt min/avg/max/mdev = 1.525/2.855/5.291/1.724 ms 2026-01-13 01:27:48.713920 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:27:48.713949 | orchestrator | + ping -c3 192.168.112.133 2026-01-13 01:27:48.722955 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 
2026-01-13 01:27:48.723039 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=4.22 ms
2026-01-13 01:27:49.721242 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=1.68 ms
2026-01-13 01:27:50.723171 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.38 ms
2026-01-13 01:27:50.723308 | orchestrator |
2026-01-13 01:27:50.723325 | orchestrator | --- 192.168.112.133 ping statistics ---
2026-01-13 01:27:50.723337 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-13 01:27:50.723350 | orchestrator | rtt min/avg/max/mdev = 1.375/2.424/4.216/1.273 ms
2026-01-13 01:27:50.724106 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-13 01:27:50.724158 | orchestrator | + compute_list
2026-01-13 01:27:50.724174 | orchestrator | + osism manage compute list testbed-node-3
2026-01-13 01:27:54.241439 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:27:54.241503 | orchestrator | | ID                                   | Name   | Status   |
2026-01-13 01:27:54.241513 | orchestrator | |--------------------------------------+--------+----------|
2026-01-13 01:27:54.241521 | orchestrator | | e7fd2462-0b34-4225-997f-2488fa25b38e | test-2 | ACTIVE   |
2026-01-13 01:27:54.241527 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:27:54.582001 | orchestrator | + osism manage compute list testbed-node-4
2026-01-13 01:27:57.966868 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:27:57.966922 | orchestrator | | ID                                   | Name   | Status   |
2026-01-13 01:27:57.966927 | orchestrator | |--------------------------------------+--------+----------|
2026-01-13 01:27:57.966932 | orchestrator | | 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 | test-4 | ACTIVE   |
2026-01-13 01:27:57.966936 | orchestrator | | 105b5d47-9d05-45ab-8759-11299bd19793 | test   | ACTIVE   |
2026-01-13 01:27:57.966940 | orchestrator |
+--------------------------------------+--------+----------+
2026-01-13 01:27:58.287430 | orchestrator | + osism manage compute list testbed-node-5
2026-01-13 01:28:01.494702 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:28:01.494775 | orchestrator | | ID                                   | Name   | Status   |
2026-01-13 01:28:01.494784 | orchestrator | |--------------------------------------+--------+----------|
2026-01-13 01:28:01.494792 | orchestrator | | 8ac30647-bb69-4d10-9174-753873ad881b | test-3 | ACTIVE   |
2026-01-13 01:28:01.494799 | orchestrator | | 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 | test-1 | ACTIVE   |
2026-01-13 01:28:01.494806 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:28:01.862942 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2026-01-13 01:28:05.311032 | orchestrator | 2026-01-13 01:28:05 | INFO  | Live migrating server 1b15012c-1cbf-4117-a3aa-ee944b91d5d2
2026-01-13 01:28:18.183777 | orchestrator | 2026-01-13 01:28:18 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress
2026-01-13 01:28:20.551387 | orchestrator | 2026-01-13 01:28:20 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress
2026-01-13 01:28:22.852376 | orchestrator | 2026-01-13 01:28:22 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress
2026-01-13 01:28:25.468496 | orchestrator | 2026-01-13 01:28:25 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress
2026-01-13 01:28:27.795276 | orchestrator | 2026-01-13 01:28:27 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress
2026-01-13 01:28:30.146509 | orchestrator | 2026-01-13 01:28:30 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress
2026-01-13 01:28:32.490084 | orchestrator | 2026-01-13
01:28:32 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:28:34.699892 | orchestrator | 2026-01-13 01:28:34 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:28:37.001292 | orchestrator | 2026-01-13 01:28:37 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) completed with status ACTIVE 2026-01-13 01:28:37.001342 | orchestrator | 2026-01-13 01:28:37 | INFO  | Live migrating server 105b5d47-9d05-45ab-8759-11299bd19793 2026-01-13 01:28:49.337072 | orchestrator | 2026-01-13 01:28:49 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:28:51.600451 | orchestrator | 2026-01-13 01:28:51 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:28:53.872790 | orchestrator | 2026-01-13 01:28:53 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:28:56.162951 | orchestrator | 2026-01-13 01:28:56 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:28:58.460405 | orchestrator | 2026-01-13 01:28:58 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:29:00.873524 | orchestrator | 2026-01-13 01:29:00 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:29:03.266677 | orchestrator | 2026-01-13 01:29:03 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:29:05.474923 | orchestrator | 2026-01-13 01:29:05 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:29:07.755844 | orchestrator | 2026-01-13 01:29:07 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 
01:29:09.961388 | orchestrator | 2026-01-13 01:29:09 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress
2026-01-13 01:29:12.247479 | orchestrator | 2026-01-13 01:29:12 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) completed with status ACTIVE
2026-01-13 01:29:12.614779 | orchestrator | + compute_list
2026-01-13 01:29:12.614833 | orchestrator | + osism manage compute list testbed-node-3
2026-01-13 01:29:15.934372 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:29:15.934495 | orchestrator | | ID                                   | Name   | Status   |
2026-01-13 01:29:15.934507 | orchestrator | |--------------------------------------+--------+----------|
2026-01-13 01:29:15.934513 | orchestrator | | 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 | test-4 | ACTIVE   |
2026-01-13 01:29:15.934520 | orchestrator | | e7fd2462-0b34-4225-997f-2488fa25b38e | test-2 | ACTIVE   |
2026-01-13 01:29:15.934526 | orchestrator | | 105b5d47-9d05-45ab-8759-11299bd19793 | test   | ACTIVE   |
2026-01-13 01:29:15.934533 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:29:16.322819 | orchestrator | + osism manage compute list testbed-node-4
2026-01-13 01:29:19.154649 | orchestrator | +------+--------+----------+
2026-01-13 01:29:19.154708 | orchestrator | | ID   | Name   | Status   |
2026-01-13 01:29:19.154713 | orchestrator | |------+--------+----------|
2026-01-13 01:29:19.154717 | orchestrator | +------+--------+----------+
2026-01-13 01:29:19.482859 | orchestrator | + osism manage compute list testbed-node-5
2026-01-13 01:29:22.577225 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:29:22.577352 | orchestrator | | ID                                   | Name   | Status   |
2026-01-13 01:29:22.577369 | orchestrator | |--------------------------------------+--------+----------|
2026-01-13 01:29:22.577381 | orchestrator | | 8ac30647-bb69-4d10-9174-753873ad881b | test-3 | ACTIVE   |
2026-01-13
01:29:22.577393 | orchestrator | | 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 | test-1 | ACTIVE | 2026-01-13 01:29:22.577403 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-13 01:29:22.934206 | orchestrator | + server_ping 2026-01-13 01:29:22.935660 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-01-13 01:29:22.935714 | orchestrator | ++ tr -d '\r' 2026-01-13 01:29:25.794761 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:29:25.794846 | orchestrator | + ping -c3 192.168.112.197 2026-01-13 01:29:25.804786 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 2026-01-13 01:29:25.804866 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=7.49 ms 2026-01-13 01:29:26.800765 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=1.61 ms 2026-01-13 01:29:27.802946 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.34 ms 2026-01-13 01:29:27.803007 | orchestrator | 2026-01-13 01:29:27.803016 | orchestrator | --- 192.168.112.197 ping statistics --- 2026-01-13 01:29:27.803023 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-01-13 01:29:27.803029 | orchestrator | rtt min/avg/max/mdev = 1.341/3.482/7.494/2.838 ms 2026-01-13 01:29:27.803595 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:29:27.803628 | orchestrator | + ping -c3 192.168.112.184 2026-01-13 01:29:27.812674 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data. 
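The trace also calls a `compute_list` helper that prints the instance table for each compute node in turn. A plausible sketch (node names come from the trace; the function body itself is an assumption):

```shell
# Sketch of the compute_list helper traced above (assumed body):
# show which instances currently run on each compute node.
compute_list() {
    for node in testbed-node-3 testbed-node-4 testbed-node-5; do
        osism manage compute list "$node"
    done
}
```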
2026-01-13 01:29:27.812735 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=4.36 ms 2026-01-13 01:29:28.812725 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=1.88 ms 2026-01-13 01:29:29.813951 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=1.54 ms 2026-01-13 01:29:29.814007 | orchestrator | 2026-01-13 01:29:29.814068 | orchestrator | --- 192.168.112.184 ping statistics --- 2026-01-13 01:29:29.814075 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:29:29.814080 | orchestrator | rtt min/avg/max/mdev = 1.536/2.592/4.360/1.257 ms 2026-01-13 01:29:29.815804 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:29:29.815854 | orchestrator | + ping -c3 192.168.112.182 2026-01-13 01:29:29.824592 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 2026-01-13 01:29:29.824649 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=4.10 ms 2026-01-13 01:29:30.823341 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=1.41 ms 2026-01-13 01:29:31.825248 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.17 ms 2026-01-13 01:29:31.825293 | orchestrator | 2026-01-13 01:29:31.825298 | orchestrator | --- 192.168.112.182 ping statistics --- 2026-01-13 01:29:31.825302 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:29:31.825305 | orchestrator | rtt min/avg/max/mdev = 1.171/2.227/4.100/1.327 ms 2026-01-13 01:29:31.825374 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:29:31.825377 | orchestrator | + ping -c3 192.168.112.189 2026-01-13 01:29:31.837359 | orchestrator | PING 192.168.112.189 (192.168.112.189) 56(84) bytes of data. 
2026-01-13 01:29:31.837405 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=1 ttl=63 time=6.53 ms 2026-01-13 01:29:32.834319 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=2 ttl=63 time=1.64 ms 2026-01-13 01:29:33.836571 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=3 ttl=63 time=1.39 ms 2026-01-13 01:29:33.836632 | orchestrator | 2026-01-13 01:29:33.836641 | orchestrator | --- 192.168.112.189 ping statistics --- 2026-01-13 01:29:33.836649 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-01-13 01:29:33.836657 | orchestrator | rtt min/avg/max/mdev = 1.391/3.188/6.533/2.367 ms 2026-01-13 01:29:33.837241 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:29:33.837287 | orchestrator | + ping -c3 192.168.112.133 2026-01-13 01:29:33.845464 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 2026-01-13 01:29:33.845513 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=4.03 ms 2026-01-13 01:29:34.844225 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=1.37 ms 2026-01-13 01:29:35.845974 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.24 ms 2026-01-13 01:29:35.846144 | orchestrator | 2026-01-13 01:29:35.846150 | orchestrator | --- 192.168.112.133 ping statistics --- 2026-01-13 01:29:35.846618 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:29:35.846644 | orchestrator | rtt min/avg/max/mdev = 1.243/2.213/4.026/1.282 ms 2026-01-13 01:29:35.846658 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2026-01-13 01:29:39.206118 | orchestrator | 2026-01-13 01:29:39 | INFO  | Live migrating server 8ac30647-bb69-4d10-9174-753873ad881b 2026-01-13 01:29:50.239377 | orchestrator | 2026-01-13 01:29:50 | INFO  | Live migration of 
8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:29:52.549675 | orchestrator | 2026-01-13 01:29:52 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:29:54.815612 | orchestrator | 2026-01-13 01:29:54 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:29:57.183327 | orchestrator | 2026-01-13 01:29:57 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:29:59.510302 | orchestrator | 2026-01-13 01:29:59 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:30:01.721873 | orchestrator | 2026-01-13 01:30:01 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:30:03.920620 | orchestrator | 2026-01-13 01:30:03 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:30:06.132348 | orchestrator | 2026-01-13 01:30:06 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:30:08.382641 | orchestrator | 2026-01-13 01:30:08 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:30:10.643083 | orchestrator | 2026-01-13 01:30:10 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) completed with status ACTIVE 2026-01-13 01:30:10.643132 | orchestrator | 2026-01-13 01:30:10 | INFO  | Live migrating server 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 2026-01-13 01:30:20.743966 | orchestrator | 2026-01-13 01:30:20 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:30:23.007058 | orchestrator | 2026-01-13 01:30:23 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:30:25.279280 | orchestrator 
| 2026-01-13 01:30:25 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress
2026-01-13 01:30:27.554345 | orchestrator | 2026-01-13 01:30:27 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress
2026-01-13 01:30:29.926402 | orchestrator | 2026-01-13 01:30:29 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress
2026-01-13 01:30:32.329903 | orchestrator | 2026-01-13 01:30:32 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress
2026-01-13 01:30:34.695362 | orchestrator | 2026-01-13 01:30:34 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress
2026-01-13 01:30:36.913252 | orchestrator | 2026-01-13 01:30:36 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress
2026-01-13 01:30:39.230267 | orchestrator | 2026-01-13 01:30:39 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) completed with status ACTIVE
2026-01-13 01:30:39.582604 | orchestrator | + compute_list
2026-01-13 01:30:39.582698 | orchestrator | + osism manage compute list testbed-node-3
2026-01-13 01:30:43.105909 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:30:43.106053 | orchestrator | | ID                                   | Name   | Status   |
2026-01-13 01:30:43.106068 | orchestrator | |--------------------------------------+--------+----------|
2026-01-13 01:30:43.106075 | orchestrator | | 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 | test-4 | ACTIVE   |
2026-01-13 01:30:43.106081 | orchestrator | | 8ac30647-bb69-4d10-9174-753873ad881b | test-3 | ACTIVE   |
2026-01-13 01:30:43.106088 | orchestrator | | e7fd2462-0b34-4225-997f-2488fa25b38e | test-2 | ACTIVE   |
2026-01-13 01:30:43.106095 | orchestrator | | 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 | test-1 | ACTIVE   |
2026-01-13 01:30:43.106102 | orchestrator | | 105b5d47-9d05-45ab-8759-11299bd19793 | test   | ACTIVE   |
2026-01-13 01:30:43.106108 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:30:43.443781 | orchestrator | + osism manage compute list testbed-node-4
2026-01-13 01:30:46.167691 | orchestrator | +------+--------+----------+
2026-01-13 01:30:46.167758 | orchestrator | | ID   | Name   | Status   |
2026-01-13 01:30:46.167769 | orchestrator | |------+--------+----------|
2026-01-13 01:30:46.167777 | orchestrator | +------+--------+----------+
2026-01-13 01:30:46.479345 | orchestrator | + osism manage compute list testbed-node-5
2026-01-13 01:30:49.238451 | orchestrator | +------+--------+----------+
2026-01-13 01:30:49.238507 | orchestrator | | ID   | Name   | Status   |
2026-01-13 01:30:49.238513 | orchestrator | |------+--------+----------|
2026-01-13 01:30:49.238517 | orchestrator | +------+--------+----------+
2026-01-13 01:30:49.574495 | orchestrator | + server_ping
2026-01-13 01:30:49.575377 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-13 01:30:49.575731 | orchestrator | ++ tr -d '\r'
2026-01-13 01:30:52.502202 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-13 01:30:52.502293 | orchestrator | + ping -c3 192.168.112.197
2026-01-13 01:30:52.510443 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data.
2026-01-13 01:30:52.510527 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=5.99 ms 2026-01-13 01:30:53.509013 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.77 ms 2026-01-13 01:30:54.510489 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.85 ms 2026-01-13 01:30:54.510590 | orchestrator | 2026-01-13 01:30:54.510598 | orchestrator | --- 192.168.112.197 ping statistics --- 2026-01-13 01:30:54.510604 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:30:54.510609 | orchestrator | rtt min/avg/max/mdev = 1.846/3.535/5.993/1.778 ms 2026-01-13 01:30:54.511205 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:30:54.511379 | orchestrator | + ping -c3 192.168.112.184 2026-01-13 01:30:54.522151 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data. 2026-01-13 01:30:54.522224 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=6.14 ms 2026-01-13 01:30:55.519408 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=1.60 ms 2026-01-13 01:30:56.520844 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=1.36 ms 2026-01-13 01:30:56.520900 | orchestrator | 2026-01-13 01:30:56.520909 | orchestrator | --- 192.168.112.184 ping statistics --- 2026-01-13 01:30:56.520916 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:30:56.520923 | orchestrator | rtt min/avg/max/mdev = 1.363/3.032/6.136/2.196 ms 2026-01-13 01:30:56.521034 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:30:56.521083 | orchestrator | + ping -c3 192.168.112.182 2026-01-13 01:30:56.530174 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 
2026-01-13 01:30:56.530231 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=3.73 ms 2026-01-13 01:30:57.531048 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=1.89 ms 2026-01-13 01:30:58.530681 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.32 ms 2026-01-13 01:30:58.530732 | orchestrator | 2026-01-13 01:30:58.530739 | orchestrator | --- 192.168.112.182 ping statistics --- 2026-01-13 01:30:58.530744 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:30:58.530748 | orchestrator | rtt min/avg/max/mdev = 1.318/2.315/3.734/1.030 ms 2026-01-13 01:30:58.531382 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:30:58.531520 | orchestrator | + ping -c3 192.168.112.189 2026-01-13 01:30:58.541260 | orchestrator | PING 192.168.112.189 (192.168.112.189) 56(84) bytes of data. 2026-01-13 01:30:58.541304 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=1 ttl=63 time=4.87 ms 2026-01-13 01:30:59.539209 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=2 ttl=63 time=1.51 ms 2026-01-13 01:31:00.540671 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=3 ttl=63 time=1.13 ms 2026-01-13 01:31:00.540729 | orchestrator | 2026-01-13 01:31:00.540735 | orchestrator | --- 192.168.112.189 ping statistics --- 2026-01-13 01:31:00.540741 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:31:00.540745 | orchestrator | rtt min/avg/max/mdev = 1.134/2.505/4.871/1.679 ms 2026-01-13 01:31:00.542297 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:31:00.542332 | orchestrator | + ping -c3 192.168.112.133 2026-01-13 01:31:00.549790 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 
2026-01-13 01:31:00.549838 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=3.11 ms 2026-01-13 01:31:01.550746 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.25 ms 2026-01-13 01:31:02.551606 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.70 ms 2026-01-13 01:31:02.551679 | orchestrator | 2026-01-13 01:31:02.551689 | orchestrator | --- 192.168.112.133 ping statistics --- 2026-01-13 01:31:02.551697 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:31:02.551713 | orchestrator | rtt min/avg/max/mdev = 1.701/2.354/3.111/0.580 ms 2026-01-13 01:31:02.552396 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2026-01-13 01:31:06.067731 | orchestrator | 2026-01-13 01:31:06 | INFO  | Live migrating server 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 2026-01-13 01:31:18.164009 | orchestrator | 2026-01-13 01:31:18 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:31:20.437543 | orchestrator | 2026-01-13 01:31:20 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:31:22.721380 | orchestrator | 2026-01-13 01:31:22 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:31:24.987411 | orchestrator | 2026-01-13 01:31:24 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:31:27.367962 | orchestrator | 2026-01-13 01:31:27 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:31:29.668484 | orchestrator | 2026-01-13 01:31:29 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:31:31.892569 | orchestrator | 2026-01-13 01:31:31 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is 
still in progress 2026-01-13 01:31:34.137461 | orchestrator | 2026-01-13 01:31:34 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:31:36.374558 | orchestrator | 2026-01-13 01:31:36 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) completed with status ACTIVE 2026-01-13 01:31:36.374644 | orchestrator | 2026-01-13 01:31:36 | INFO  | Live migrating server 8ac30647-bb69-4d10-9174-753873ad881b 2026-01-13 01:31:48.659712 | orchestrator | 2026-01-13 01:31:48 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:31:51.064551 | orchestrator | 2026-01-13 01:31:51 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:31:53.476049 | orchestrator | 2026-01-13 01:31:53 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:31:55.676774 | orchestrator | 2026-01-13 01:31:55 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:31:57.874790 | orchestrator | 2026-01-13 01:31:57 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:32:00.087432 | orchestrator | 2026-01-13 01:32:00 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:32:02.306939 | orchestrator | 2026-01-13 01:32:02 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:32:04.632074 | orchestrator | 2026-01-13 01:32:04 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:32:07.218747 | orchestrator | 2026-01-13 01:32:07 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) completed with status ACTIVE 2026-01-13 01:32:07.218941 | orchestrator | 2026-01-13 01:32:07 | INFO  | Live 
migrating server e7fd2462-0b34-4225-997f-2488fa25b38e 2026-01-13 01:32:18.005469 | orchestrator | 2026-01-13 01:32:18 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:32:20.402237 | orchestrator | 2026-01-13 01:32:20 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:32:22.779670 | orchestrator | 2026-01-13 01:32:22 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:32:25.027702 | orchestrator | 2026-01-13 01:32:25 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:32:27.341237 | orchestrator | 2026-01-13 01:32:27 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:32:29.544134 | orchestrator | 2026-01-13 01:32:29 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:32:31.783927 | orchestrator | 2026-01-13 01:32:31 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:32:34.011467 | orchestrator | 2026-01-13 01:32:34 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:32:36.403991 | orchestrator | 2026-01-13 01:32:36 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:32:38.601011 | orchestrator | 2026-01-13 01:32:38 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) completed with status ACTIVE 2026-01-13 01:32:38.601063 | orchestrator | 2026-01-13 01:32:38 | INFO  | Live migrating server 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 2026-01-13 01:32:48.674607 | orchestrator | 2026-01-13 01:32:48 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:32:50.941451 | orchestrator | 2026-01-13 
01:32:50 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:32:53.214985 | orchestrator | 2026-01-13 01:32:53 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:32:55.420350 | orchestrator | 2026-01-13 01:32:55 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:32:57.614930 | orchestrator | 2026-01-13 01:32:57 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:32:59.819076 | orchestrator | 2026-01-13 01:32:59 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:33:02.038899 | orchestrator | 2026-01-13 01:33:02 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:33:04.272316 | orchestrator | 2026-01-13 01:33:04 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:33:06.578816 | orchestrator | 2026-01-13 01:33:06 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) completed with status ACTIVE 2026-01-13 01:33:06.578864 | orchestrator | 2026-01-13 01:33:06 | INFO  | Live migrating server 105b5d47-9d05-45ab-8759-11299bd19793 2026-01-13 01:33:16.232181 | orchestrator | 2026-01-13 01:33:16 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:33:18.650836 | orchestrator | 2026-01-13 01:33:18 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:33:21.044332 | orchestrator | 2026-01-13 01:33:21 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:33:23.310676 | orchestrator | 2026-01-13 01:33:23 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 
01:33:25.573917 | orchestrator | 2026-01-13 01:33:25 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress
2026-01-13 01:33:27.872973 | orchestrator | 2026-01-13 01:33:27 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress
2026-01-13 01:33:30.242784 | orchestrator | 2026-01-13 01:33:30 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress
2026-01-13 01:33:32.547826 | orchestrator | 2026-01-13 01:33:32 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress
2026-01-13 01:33:34.832599 | orchestrator | 2026-01-13 01:33:34 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress
2026-01-13 01:33:37.051461 | orchestrator | 2026-01-13 01:33:37 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress
2026-01-13 01:33:39.384253 | orchestrator | 2026-01-13 01:33:39 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) completed with status ACTIVE
2026-01-13 01:33:39.724326 | orchestrator | + compute_list
2026-01-13 01:33:39.724396 | orchestrator | + osism manage compute list testbed-node-3
2026-01-13 01:33:42.634216 | orchestrator | +------+--------+----------+
2026-01-13 01:33:42.634294 | orchestrator | | ID   | Name   | Status   |
2026-01-13 01:33:42.634300 | orchestrator | |------+--------+----------|
2026-01-13 01:33:42.634305 | orchestrator | +------+--------+----------+
2026-01-13 01:33:43.034592 | orchestrator | + osism manage compute list testbed-node-4
2026-01-13 01:33:46.193432 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:33:46.193477 | orchestrator | | ID                                   | Name   | Status   |
2026-01-13 01:33:46.193482 | orchestrator | |--------------------------------------+--------+----------|
2026-01-13 01:33:46.193485 | orchestrator | | 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 | test-4 | ACTIVE   |
2026-01-13 01:33:46.193489 | orchestrator | | 8ac30647-bb69-4d10-9174-753873ad881b | test-3 | ACTIVE   |
2026-01-13 01:33:46.193505 | orchestrator | | e7fd2462-0b34-4225-997f-2488fa25b38e | test-2 | ACTIVE   |
2026-01-13 01:33:46.193508 | orchestrator | | 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 | test-1 | ACTIVE   |
2026-01-13 01:33:46.193512 | orchestrator | | 105b5d47-9d05-45ab-8759-11299bd19793 | test   | ACTIVE   |
2026-01-13 01:33:46.193515 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:33:46.564568 | orchestrator | + osism manage compute list testbed-node-5
2026-01-13 01:33:49.235986 | orchestrator | +------+--------+----------+
2026-01-13 01:33:49.236049 | orchestrator | | ID   | Name   | Status   |
2026-01-13 01:33:49.236065 | orchestrator | |------+--------+----------|
2026-01-13 01:33:49.236077 | orchestrator | +------+--------+----------+
2026-01-13 01:33:49.568651 | orchestrator | + server_ping
2026-01-13 01:33:49.570214 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-13 01:33:49.570265 | orchestrator | ++ tr -d '\r'
2026-01-13 01:33:52.446116 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-13 01:33:52.446169 | orchestrator | + ping -c3 192.168.112.197
2026-01-13 01:33:52.452493 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data.
2026-01-13 01:33:52.452553 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=4.01 ms 2026-01-13 01:33:53.452755 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.79 ms 2026-01-13 01:33:54.454520 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=2.05 ms 2026-01-13 01:33:54.454612 | orchestrator | 2026-01-13 01:33:54.454620 | orchestrator | --- 192.168.112.197 ping statistics --- 2026-01-13 01:33:54.454625 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:33:54.454630 | orchestrator | rtt min/avg/max/mdev = 2.053/2.952/4.012/0.807 ms 2026-01-13 01:33:54.455201 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:33:54.455257 | orchestrator | + ping -c3 192.168.112.184 2026-01-13 01:33:54.466414 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data. 2026-01-13 01:33:54.466501 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=6.93 ms 2026-01-13 01:33:55.462728 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.04 ms 2026-01-13 01:33:56.464451 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=2.46 ms 2026-01-13 01:33:56.464561 | orchestrator | 2026-01-13 01:33:56.464572 | orchestrator | --- 192.168.112.184 ping statistics --- 2026-01-13 01:33:56.464580 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-01-13 01:33:56.464587 | orchestrator | rtt min/avg/max/mdev = 2.043/3.812/6.933/2.213 ms 2026-01-13 01:33:56.465340 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:33:56.465397 | orchestrator | + ping -c3 192.168.112.182 2026-01-13 01:33:56.479971 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 
2026-01-13 01:33:56.480046 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=9.59 ms 2026-01-13 01:33:57.475033 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.55 ms 2026-01-13 01:33:58.474609 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.24 ms 2026-01-13 01:33:58.474669 | orchestrator | 2026-01-13 01:33:58.474710 | orchestrator | --- 192.168.112.182 ping statistics --- 2026-01-13 01:33:58.474717 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:33:58.474722 | orchestrator | rtt min/avg/max/mdev = 1.244/4.460/9.590/3.666 ms 2026-01-13 01:33:58.475116 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:33:58.475152 | orchestrator | + ping -c3 192.168.112.189 2026-01-13 01:33:58.483287 | orchestrator | PING 192.168.112.189 (192.168.112.189) 56(84) bytes of data. 2026-01-13 01:33:58.483342 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=1 ttl=63 time=3.65 ms 2026-01-13 01:33:59.483046 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=2 ttl=63 time=1.39 ms 2026-01-13 01:34:00.484544 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=3 ttl=63 time=1.19 ms 2026-01-13 01:34:00.484603 | orchestrator | 2026-01-13 01:34:00.484614 | orchestrator | --- 192.168.112.189 ping statistics --- 2026-01-13 01:34:00.484623 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:34:00.484647 | orchestrator | rtt min/avg/max/mdev = 1.191/2.075/3.646/1.113 ms 2026-01-13 01:34:00.485494 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-13 01:34:00.485531 | orchestrator | + ping -c3 192.168.112.133 2026-01-13 01:34:00.494590 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 
2026-01-13 01:34:00.494650 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=3.97 ms 2026-01-13 01:34:01.493786 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=1.71 ms 2026-01-13 01:34:02.495557 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.58 ms 2026-01-13 01:34:02.495619 | orchestrator | 2026-01-13 01:34:02.495630 | orchestrator | --- 192.168.112.133 ping statistics --- 2026-01-13 01:34:02.495637 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-13 01:34:02.495644 | orchestrator | rtt min/avg/max/mdev = 1.579/2.418/3.965/1.095 ms 2026-01-13 01:34:02.497118 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2026-01-13 01:34:06.087612 | orchestrator | 2026-01-13 01:34:06 | INFO  | Live migrating server 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 2026-01-13 01:34:15.319503 | orchestrator | 2026-01-13 01:34:15 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:34:17.725563 | orchestrator | 2026-01-13 01:34:17 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:34:20.113315 | orchestrator | 2026-01-13 01:34:20 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:34:22.287902 | orchestrator | 2026-01-13 01:34:22 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:34:24.502283 | orchestrator | 2026-01-13 01:34:24 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:34:26.725087 | orchestrator | 2026-01-13 01:34:26 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:34:29.043502 | orchestrator | 2026-01-13 01:34:29 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is 
still in progress 2026-01-13 01:34:31.361083 | orchestrator | 2026-01-13 01:34:31 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) is still in progress 2026-01-13 01:34:33.663686 | orchestrator | 2026-01-13 01:34:33 | INFO  | Live migration of 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 (test-4) completed with status ACTIVE 2026-01-13 01:34:33.663786 | orchestrator | 2026-01-13 01:34:33 | INFO  | Live migrating server 8ac30647-bb69-4d10-9174-753873ad881b 2026-01-13 01:34:43.401247 | orchestrator | 2026-01-13 01:34:43 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:34:45.749917 | orchestrator | 2026-01-13 01:34:45 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:34:48.083864 | orchestrator | 2026-01-13 01:34:48 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:34:50.484450 | orchestrator | 2026-01-13 01:34:50 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:34:52.830796 | orchestrator | 2026-01-13 01:34:52 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:34:55.063091 | orchestrator | 2026-01-13 01:34:55 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:34:57.270255 | orchestrator | 2026-01-13 01:34:57 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:34:59.468876 | orchestrator | 2026-01-13 01:34:59 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) is still in progress 2026-01-13 01:35:01.844684 | orchestrator | 2026-01-13 01:35:01 | INFO  | Live migration of 8ac30647-bb69-4d10-9174-753873ad881b (test-3) completed with status ACTIVE 2026-01-13 01:35:01.844733 | orchestrator | 2026-01-13 01:35:01 | INFO  | Live 
migrating server e7fd2462-0b34-4225-997f-2488fa25b38e 2026-01-13 01:35:10.818746 | orchestrator | 2026-01-13 01:35:10 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:35:13.062647 | orchestrator | 2026-01-13 01:35:13 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:35:15.305939 | orchestrator | 2026-01-13 01:35:15 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:35:17.633200 | orchestrator | 2026-01-13 01:35:17 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:35:20.022693 | orchestrator | 2026-01-13 01:35:20 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:35:22.304095 | orchestrator | 2026-01-13 01:35:22 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:35:24.520207 | orchestrator | 2026-01-13 01:35:24 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:35:26.735694 | orchestrator | 2026-01-13 01:35:26 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) is still in progress 2026-01-13 01:35:29.062507 | orchestrator | 2026-01-13 01:35:29 | INFO  | Live migration of e7fd2462-0b34-4225-997f-2488fa25b38e (test-2) completed with status ACTIVE 2026-01-13 01:35:29.062631 | orchestrator | 2026-01-13 01:35:29 | INFO  | Live migrating server 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 2026-01-13 01:35:39.006053 | orchestrator | 2026-01-13 01:35:39 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:35:41.380976 | orchestrator | 2026-01-13 01:35:41 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:35:43.712271 | orchestrator | 2026-01-13 
01:35:43 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:35:46.011060 | orchestrator | 2026-01-13 01:35:46 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:35:48.285746 | orchestrator | 2026-01-13 01:35:48 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:35:50.525894 | orchestrator | 2026-01-13 01:35:50 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:35:52.823818 | orchestrator | 2026-01-13 01:35:52 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:35:55.262916 | orchestrator | 2026-01-13 01:35:55 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) is still in progress 2026-01-13 01:35:57.645964 | orchestrator | 2026-01-13 01:35:57 | INFO  | Live migration of 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 (test-1) completed with status ACTIVE 2026-01-13 01:35:57.646103 | orchestrator | 2026-01-13 01:35:57 | INFO  | Live migrating server 105b5d47-9d05-45ab-8759-11299bd19793 2026-01-13 01:36:07.591442 | orchestrator | 2026-01-13 01:36:07 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:36:09.866173 | orchestrator | 2026-01-13 01:36:09 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:36:12.124838 | orchestrator | 2026-01-13 01:36:12 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:36:14.434695 | orchestrator | 2026-01-13 01:36:14 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 01:36:16.739908 | orchestrator | 2026-01-13 01:36:16 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress 2026-01-13 
01:36:19.030774 | orchestrator | 2026-01-13 01:36:19 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress
2026-01-13 01:36:21.348662 | orchestrator | 2026-01-13 01:36:21 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress
2026-01-13 01:36:23.636127 | orchestrator | 2026-01-13 01:36:23 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress
2026-01-13 01:36:25.912895 | orchestrator | 2026-01-13 01:36:25 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress
2026-01-13 01:36:28.297098 | orchestrator | 2026-01-13 01:36:28 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) is still in progress
2026-01-13 01:36:30.684165 | orchestrator | 2026-01-13 01:36:30 | INFO  | Live migration of 105b5d47-9d05-45ab-8759-11299bd19793 (test) completed with status ACTIVE
2026-01-13 01:36:31.042153 | orchestrator | + compute_list
2026-01-13 01:36:31.042224 | orchestrator | + osism manage compute list testbed-node-3
2026-01-13 01:36:33.869150 | orchestrator | +------+--------+----------+
2026-01-13 01:36:33.869248 | orchestrator | | ID | Name | Status |
2026-01-13 01:36:33.869261 | orchestrator | |------+--------+----------|
2026-01-13 01:36:33.869270 | orchestrator | +------+--------+----------+
2026-01-13 01:36:34.255407 | orchestrator | + osism manage compute list testbed-node-4
2026-01-13 01:36:37.085343 | orchestrator | +------+--------+----------+
2026-01-13 01:36:37.085458 | orchestrator | | ID | Name | Status |
2026-01-13 01:36:37.085470 | orchestrator | |------+--------+----------|
2026-01-13 01:36:37.085478 | orchestrator | +------+--------+----------+
2026-01-13 01:36:37.444095 | orchestrator | + osism manage compute list testbed-node-5
2026-01-13 01:36:40.646890 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:36:40.646960 | orchestrator | | ID | Name | Status |
2026-01-13 01:36:40.646966 | orchestrator | |--------------------------------------+--------+----------|
2026-01-13 01:36:40.646971 | orchestrator | | 1b15012c-1cbf-4117-a3aa-ee944b91d5d2 | test-4 | ACTIVE |
2026-01-13 01:36:40.646976 | orchestrator | | 8ac30647-bb69-4d10-9174-753873ad881b | test-3 | ACTIVE |
2026-01-13 01:36:40.646980 | orchestrator | | e7fd2462-0b34-4225-997f-2488fa25b38e | test-2 | ACTIVE |
2026-01-13 01:36:40.646984 | orchestrator | | 6dc66e2a-091f-4945-a07c-d12a01a3a7a2 | test-1 | ACTIVE |
2026-01-13 01:36:40.646988 | orchestrator | | 105b5d47-9d05-45ab-8759-11299bd19793 | test | ACTIVE |
2026-01-13 01:36:40.646992 | orchestrator | +--------------------------------------+--------+----------+
2026-01-13 01:36:40.977582 | orchestrator | + server_ping
2026-01-13 01:36:40.978409 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-13 01:36:40.979008 | orchestrator | ++ tr -d '\r'
2026-01-13 01:36:43.866253 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-13 01:36:43.866337 | orchestrator | + ping -c3 192.168.112.197
2026-01-13 01:36:43.877811 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data.
2026-01-13 01:36:43.877906 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=8.48 ms
2026-01-13 01:36:44.873440 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=2.13 ms
2026-01-13 01:36:45.875135 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.60 ms
2026-01-13 01:36:45.875208 | orchestrator |
2026-01-13 01:36:45.875235 | orchestrator | --- 192.168.112.197 ping statistics ---
2026-01-13 01:36:45.875243 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-13 01:36:45.875249 | orchestrator | rtt min/avg/max/mdev = 1.596/4.068/8.483/3.129 ms
2026-01-13 01:36:45.875907 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-13 01:36:45.875939 | orchestrator | + ping -c3 192.168.112.184
2026-01-13 01:36:45.884420 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data.
2026-01-13 01:36:45.884498 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=4.66 ms
2026-01-13 01:36:46.884108 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.29 ms
2026-01-13 01:36:47.885073 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=1.52 ms
2026-01-13 01:36:47.885687 | orchestrator |
2026-01-13 01:36:47.885716 | orchestrator | --- 192.168.112.184 ping statistics ---
2026-01-13 01:36:47.885731 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-13 01:36:47.885743 | orchestrator | rtt min/avg/max/mdev = 1.522/2.825/4.659/1.334 ms
2026-01-13 01:36:47.886062 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-13 01:36:47.886088 | orchestrator | + ping -c3 192.168.112.182
2026-01-13 01:36:47.894644 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data.
2026-01-13 01:36:47.894690 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=3.96 ms
2026-01-13 01:36:48.894118 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.01 ms
2026-01-13 01:36:49.895390 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.74 ms
2026-01-13 01:36:49.895468 | orchestrator |
2026-01-13 01:36:49.895475 | orchestrator | --- 192.168.112.182 ping statistics ---
2026-01-13 01:36:49.895481 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-13 01:36:49.895486 | orchestrator | rtt min/avg/max/mdev = 1.738/2.567/3.956/0.988 ms
2026-01-13 01:36:49.896255 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-13 01:36:49.896290 | orchestrator | + ping -c3 192.168.112.189
2026-01-13 01:36:49.905103 | orchestrator | PING 192.168.112.189 (192.168.112.189) 56(84) bytes of data.
2026-01-13 01:36:49.905195 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=1 ttl=63 time=4.39 ms
2026-01-13 01:36:50.903668 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=2 ttl=63 time=2.19 ms
2026-01-13 01:36:51.904661 | orchestrator | 64 bytes from 192.168.112.189: icmp_seq=3 ttl=63 time=1.76 ms
2026-01-13 01:36:51.904746 | orchestrator |
2026-01-13 01:36:51.904757 | orchestrator | --- 192.168.112.189 ping statistics ---
2026-01-13 01:36:51.904766 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-13 01:36:51.904773 | orchestrator | rtt min/avg/max/mdev = 1.763/2.782/4.391/1.151 ms
2026-01-13 01:36:51.904780 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-13 01:36:51.904787 | orchestrator | + ping -c3 192.168.112.133
2026-01-13 01:36:51.914688 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2026-01-13 01:36:51.914757 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=4.42 ms
2026-01-13 01:36:52.913716 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=1.78 ms
2026-01-13 01:36:53.916397 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.83 ms
2026-01-13 01:36:53.916501 | orchestrator |
2026-01-13 01:36:53.916512 | orchestrator | --- 192.168.112.133 ping statistics ---
2026-01-13 01:36:53.916521 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-13 01:36:53.916529 | orchestrator | rtt min/avg/max/mdev = 1.776/2.674/4.415/1.230 ms
2026-01-13 01:36:54.067518 | orchestrator | ok: Runtime: 0:21:26.306938
2026-01-13 01:36:54.117292 |
2026-01-13 01:36:54.117432 | TASK [Run tempest]
2026-01-13 01:36:54.885856 | orchestrator |
2026-01-13 01:36:54.886125 | orchestrator | # Tempest
2026-01-13 01:36:54.886143 | orchestrator |
2026-01-13 01:36:54.886152 | orchestrator | + set -e
2026-01-13 01:36:54.886160 | orchestrator | + echo
2026-01-13 01:36:54.886169 | orchestrator | + echo '# Tempest'
2026-01-13 01:36:54.886180 | orchestrator | + echo
2026-01-13 01:36:54.886213 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-01-13 01:37:07.156945 | orchestrator | 2026-01-13 01:37:07 | INFO  | Task a9596858-7eea-4afc-9fd3-2a6010a5a83a (tempest) was prepared for execution.
2026-01-13 01:37:07.157039 | orchestrator | 2026-01-13 01:37:07 | INFO  | It takes a moment until task a9596858-7eea-4afc-9fd3-2a6010a5a83a (tempest) has been started and output is visible here.
2026-01-13 01:38:26.510291 | orchestrator | 2026-01-13 01:38:26.510391 | orchestrator | PLAY [Run tempest] ************************************************************* 2026-01-13 01:38:26.510408 | orchestrator | 2026-01-13 01:38:26.510420 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] ********************** 2026-01-13 01:38:26.510432 | orchestrator | Tuesday 13 January 2026 01:37:11 +0000 (0:00:00.239) 0:00:00.239 ******* 2026-01-13 01:38:26.510436 | orchestrator | changed: [testbed-manager] 2026-01-13 01:38:26.510442 | orchestrator | 2026-01-13 01:38:26.510446 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] ***************** 2026-01-13 01:38:26.510450 | orchestrator | Tuesday 13 January 2026 01:37:12 +0000 (0:00:00.710) 0:00:00.949 ******* 2026-01-13 01:38:26.510454 | orchestrator | changed: [testbed-manager] 2026-01-13 01:38:26.510458 | orchestrator | 2026-01-13 01:38:26.510470 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] *** 2026-01-13 01:38:26.510474 | orchestrator | Tuesday 13 January 2026 01:37:13 +0000 (0:00:01.216) 0:00:02.166 ******* 2026-01-13 01:38:26.510479 | orchestrator | ok: [testbed-manager] 2026-01-13 01:38:26.510484 | orchestrator | 2026-01-13 01:38:26.510488 | orchestrator | TASK [osism.validations.tempest : Init tempest] ******************************** 2026-01-13 01:38:26.510492 | orchestrator | Tuesday 13 January 2026 01:37:13 +0000 (0:00:00.428) 0:00:02.594 ******* 2026-01-13 01:38:26.510496 | orchestrator | changed: [testbed-manager] 2026-01-13 01:38:26.510500 | orchestrator | 2026-01-13 01:38:26.510503 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] *************************** 2026-01-13 01:38:26.510507 | orchestrator | Tuesday 13 January 2026 01:37:35 +0000 (0:00:21.649) 0:00:24.244 ******* 2026-01-13 01:38:26.510511 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3) 2026-01-13 
01:38:26.510516 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2) 2026-01-13 01:38:26.510520 | orchestrator | 2026-01-13 01:38:26.510524 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************ 2026-01-13 01:38:26.510528 | orchestrator | Tuesday 13 January 2026 01:37:44 +0000 (0:00:08.742) 0:00:32.987 ******* 2026-01-13 01:38:26.510532 | orchestrator | ok: [testbed-manager] => { 2026-01-13 01:38:26.510536 | orchestrator |  "changed": false, 2026-01-13 01:38:26.510540 | orchestrator |  "msg": "All assertions passed" 2026-01-13 01:38:26.510544 | orchestrator | } 2026-01-13 01:38:26.510548 | orchestrator | 2026-01-13 01:38:26.510552 | orchestrator | TASK [osism.validations.tempest : Get auth token] ****************************** 2026-01-13 01:38:26.510556 | orchestrator | Tuesday 13 January 2026 01:37:44 +0000 (0:00:00.156) 0:00:33.143 ******* 2026-01-13 01:38:26.510560 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:38:26.510563 | orchestrator | 2026-01-13 01:38:26.510567 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************ 2026-01-13 01:38:26.510571 | orchestrator | Tuesday 13 January 2026 01:37:48 +0000 (0:00:03.724) 0:00:36.867 ******* 2026-01-13 01:38:26.510575 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:38:26.510579 | orchestrator | 2026-01-13 01:38:26.510583 | orchestrator | TASK [osism.validations.tempest : Get service catalog] ************************* 2026-01-13 01:38:26.510587 | orchestrator | Tuesday 13 January 2026 01:37:49 +0000 (0:00:01.663) 0:00:38.531 ******* 2026-01-13 01:38:26.510591 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:38:26.510594 | orchestrator | 2026-01-13 01:38:26.510598 | orchestrator | TASK [osism.validations.tempest : Register img_file name] ********************** 2026-01-13 01:38:26.510678 | orchestrator | Tuesday 13 January 2026 01:37:53 +0000 (0:00:03.622) 
0:00:42.154 ******* 2026-01-13 01:38:26.510684 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:38:26.510688 | orchestrator | 2026-01-13 01:38:26.510692 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************ 2026-01-13 01:38:26.510696 | orchestrator | Tuesday 13 January 2026 01:37:53 +0000 (0:00:00.189) 0:00:42.343 ******* 2026-01-13 01:38:26.510700 | orchestrator | changed: [testbed-manager] 2026-01-13 01:38:26.510704 | orchestrator | 2026-01-13 01:38:26.510707 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ****************** 2026-01-13 01:38:26.510711 | orchestrator | Tuesday 13 January 2026 01:37:56 +0000 (0:00:02.579) 0:00:44.922 ******* 2026-01-13 01:38:26.510715 | orchestrator | changed: [testbed-manager] 2026-01-13 01:38:26.510719 | orchestrator | 2026-01-13 01:38:26.510723 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************ 2026-01-13 01:38:26.510727 | orchestrator | Tuesday 13 January 2026 01:38:06 +0000 (0:00:10.450) 0:00:55.373 ******* 2026-01-13 01:38:26.510730 | orchestrator | changed: [testbed-manager] 2026-01-13 01:38:26.510734 | orchestrator | 2026-01-13 01:38:26.510738 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ****************** 2026-01-13 01:38:26.510742 | orchestrator | Tuesday 13 January 2026 01:38:07 +0000 (0:00:00.767) 0:00:56.141 ******* 2026-01-13 01:38:26.510746 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:38:26.510749 | orchestrator | 2026-01-13 01:38:26.510753 | orchestrator | TASK [osism.validations.tempest : Revoke token] ******************************** 2026-01-13 01:38:26.510757 | orchestrator | Tuesday 13 January 2026 01:38:09 +0000 (0:00:01.501) 0:00:57.642 ******* 2026-01-13 01:38:26.510761 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:38:26.510764 | orchestrator | 2026-01-13 01:38:26.510768 | orchestrator | TASK 
[osism.validations.tempest : Set fact for config option api_extensions] *** 2026-01-13 01:38:26.510772 | orchestrator | Tuesday 13 January 2026 01:38:10 +0000 (0:00:01.543) 0:00:59.186 ******* 2026-01-13 01:38:26.510776 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:38:26.510780 | orchestrator | 2026-01-13 01:38:26.510783 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] ********* 2026-01-13 01:38:26.510787 | orchestrator | Tuesday 13 January 2026 01:38:10 +0000 (0:00:00.188) 0:00:59.375 ******* 2026-01-13 01:38:26.510791 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:38:26.510795 | orchestrator | 2026-01-13 01:38:26.510799 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] ***************** 2026-01-13 01:38:26.510802 | orchestrator | Tuesday 13 January 2026 01:38:10 +0000 (0:00:00.180) 0:00:59.556 ******* 2026-01-13 01:38:26.510806 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-13 01:38:26.510810 | orchestrator | 2026-01-13 01:38:26.510814 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] *** 2026-01-13 01:38:26.510833 | orchestrator | Tuesday 13 January 2026 01:38:14 +0000 (0:00:03.761) 0:01:03.317 ******* 2026-01-13 01:38:26.510838 | orchestrator | ok: [testbed-manager -> localhost] => { 2026-01-13 01:38:26.510842 | orchestrator |  "changed": false, 2026-01-13 01:38:26.510846 | orchestrator |  "msg": "All assertions passed" 2026-01-13 01:38:26.510850 | orchestrator | } 2026-01-13 01:38:26.510854 | orchestrator | 2026-01-13 01:38:26.510858 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] ************************** 2026-01-13 01:38:26.510861 | orchestrator | Tuesday 13 January 2026 01:38:14 +0000 (0:00:00.198) 0:01:03.515 ******* 2026-01-13 01:38:26.510865 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})  2026-01-13 
01:38:26.510873 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})  2026-01-13 01:38:26.510877 | orchestrator | skipping: [testbed-manager] 2026-01-13 01:38:26.510881 | orchestrator | 2026-01-13 01:38:26.510885 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] *********** 2026-01-13 01:38:26.510889 | orchestrator | Tuesday 13 January 2026 01:38:15 +0000 (0:00:00.443) 0:01:03.958 ******* 2026-01-13 01:38:26.510897 | orchestrator | skipping: [testbed-manager] 2026-01-13 01:38:26.510901 | orchestrator | 2026-01-13 01:38:26.510904 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] ******************* 2026-01-13 01:38:26.510908 | orchestrator | Tuesday 13 January 2026 01:38:15 +0000 (0:00:00.154) 0:01:04.113 ******* 2026-01-13 01:38:26.510912 | orchestrator | ok: [testbed-manager] 2026-01-13 01:38:26.510916 | orchestrator | 2026-01-13 01:38:26.510919 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] *************************** 2026-01-13 01:38:26.510923 | orchestrator | Tuesday 13 January 2026 01:38:16 +0000 (0:00:00.531) 0:01:04.644 ******* 2026-01-13 01:38:26.510927 | orchestrator | changed: [testbed-manager] 2026-01-13 01:38:26.510931 | orchestrator | 2026-01-13 01:38:26.510935 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] ******************* 2026-01-13 01:38:26.510938 | orchestrator | Tuesday 13 January 2026 01:38:16 +0000 (0:00:00.905) 0:01:05.550 ******* 2026-01-13 01:38:26.510942 | orchestrator | ok: [testbed-manager] 2026-01-13 01:38:26.510946 | orchestrator | 2026-01-13 01:38:26.510950 | orchestrator | TASK [osism.validations.tempest : Copy include list] *************************** 2026-01-13 01:38:26.510953 | orchestrator | Tuesday 13 January 2026 01:38:17 +0000 (0:00:00.467) 0:01:06.017 ******* 2026-01-13 01:38:26.510957 | orchestrator | skipping: [testbed-manager] 2026-01-13 
2026-01-13 01:38:26.510961 | orchestrator |
2026-01-13 01:38:26.510965 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-01-13 01:38:26.510969 | orchestrator | Tuesday 13 January 2026 01:38:17 +0000 (0:00:00.158) 0:01:06.176 *******
2026-01-13 01:38:26.510972 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-01-13 01:38:26.510977 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-01-13 01:38:26.510981 | orchestrator |
2026-01-13 01:38:26.510985 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-01-13 01:38:26.510988 | orchestrator | Tuesday 13 January 2026 01:38:25 +0000 (0:00:07.795) 0:01:13.971 *******
2026-01-13 01:38:26.510992 | orchestrator | changed: [testbed-manager]
2026-01-13 01:38:26.510996 | orchestrator |
2026-01-13 01:38:26.511000 | orchestrator | PLAY RECAP *********************************************************************
2026-01-13 01:38:26.511004 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
2026-01-13 01:38:26.511009 | orchestrator |
2026-01-13 01:38:26.511013 | orchestrator |
2026-01-13 01:38:26.511017 | orchestrator | TASKS RECAP ********************************************************************
2026-01-13 01:38:26.511020 | orchestrator | Tuesday 13 January 2026 01:38:26 +0000 (0:00:01.117) 0:01:15.089 *******
2026-01-13 01:38:26.511024 | orchestrator | ===============================================================================
2026-01-13 01:38:26.511028 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 21.65s
2026-01-13 01:38:26.511032 | orchestrator | osism.validations.tempest : Install qemu-utils package ----------------- 10.45s
2026-01-13 01:38:26.511036 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 8.74s
2026-01-13 01:38:26.511040 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.80s
2026-01-13 01:38:26.511043 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.76s
2026-01-13 01:38:26.511047 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.72s
2026-01-13 01:38:26.511051 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.62s
2026-01-13 01:38:26.511055 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.58s
2026-01-13 01:38:26.511059 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.66s
2026-01-13 01:38:26.511062 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.54s
2026-01-13 01:38:26.511066 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.50s
2026-01-13 01:38:26.511073 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.22s
2026-01-13 01:38:26.511077 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.12s
2026-01-13 01:38:26.511081 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.91s
2026-01-13 01:38:26.511088 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.77s
2026-01-13 01:38:26.511092 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 0.71s
2026-01-13 01:38:26.511096 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.53s
2026-01-13 01:38:26.511103 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.47s
2026-01-13 01:38:26.993118 | orchestrator | osism.validations.tempest : Resolve flavor IDs -------------------------- 0.44s
2026-01-13 01:38:26.993199 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.43s
2026-01-13 01:38:27.410422 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-01-13 01:38:27.413872 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-01-13 01:38:27.418162 | orchestrator |
2026-01-13 01:38:27.418254 | orchestrator | ## IDENTITY (API)
2026-01-13 01:38:27.418265 | orchestrator |
2026-01-13 01:38:27.418272 | orchestrator | + echo
2026-01-13 01:38:27.418278 | orchestrator | + echo '## IDENTITY (API)'
2026-01-13 01:38:27.418284 | orchestrator | + echo
2026-01-13 01:38:27.418290 | orchestrator | + _tempest tempest.api.identity.v3
2026-01-13 01:38:27.418298 | orchestrator | + local regex=tempest.api.identity.v3
2026-01-13 01:38:27.418556 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-01-13 01:38:27.419649 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-13 01:38:27.423313 | orchestrator | + tee -a /opt/tempest/20260113-0138.log
2026-01-13 01:38:31.932946 | orchestrator | 2026-01-13 01:38:31.936 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-13 01:38:32.035738 | orchestrator | 2026-01-13 01:38:32.039 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-13 01:38:32.035849 | orchestrator | 2026-01-13 01:38:32.039 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-13 01:38:32.035861 | orchestrator | 2026-01-13 01:38:32.040 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-13 01:38:32.035869 | orchestrator | 2026-01-13 01:38:32.040 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-13 01:38:32.036085 | orchestrator | 2026-01-13 01:38:32.040 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-13 01:38:32.036109 | orchestrator | 2026-01-13 01:38:32.040 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-13 01:38:32.036117 | orchestrator | 2026-01-13 01:38:32.041 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-13 01:38:32.036353 | orchestrator | 2026-01-13 01:38:32.041 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-13 01:38:32.036378 | orchestrator | 2026-01-13 01:38:32.041 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-13 01:38:32.036954 | orchestrator | 2026-01-13 01:38:32.041 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-13 01:38:32.037010 | orchestrator | 2026-01-13 01:38:32.042 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-13 01:38:32.037757 | orchestrator | 2026-01-13 01:38:32.042 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-13 01:38:32.037846 | orchestrator | 2026-01-13 01:38:32.042 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-13 01:38:32.037855 | orchestrator | 2026-01-13 01:38:32.042 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-13 01:38:32.037859 | orchestrator | 2026-01-13 01:38:32.042 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-13 01:38:32.038011 | orchestrator | 2026-01-13 01:38:32.043 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-13 01:38:32.038041 | orchestrator | 2026-01-13 01:38:32.043 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-13 01:38:32.038078 | orchestrator | 2026-01-13 01:38:32.043 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-13 01:38:32.038236 | orchestrator | 2026-01-13 01:38:32.043 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-13 01:38:32.038244 | orchestrator | 2026-01-13 01:38:32.043 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-13 01:38:32.038404 | orchestrator | 2026-01-13 01:38:32.043 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-13 01:38:32.038410 | orchestrator | 2026-01-13 01:38:32.043 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-13 01:38:46.160255 | orchestrator |
2026-01-13 01:38:46.160348 | orchestrator | =========================
2026-01-13 01:38:46.160362 | orchestrator | Failures during discovery
2026-01-13 01:38:46.160387 | orchestrator | =========================
2026-01-13 01:38:46.160394 | orchestrator | --- stdout ---
2026-01-13 01:38:46.160402 | orchestrator | 2026-01-13 01:38:35.858 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-13 01:38:46.160551 | orchestrator | 2026-01-13 01:38:35.866 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-13 01:38:46.160559 | orchestrator | 2026-01-13 01:38:36.714 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests
2026-01-13 01:38:46.160563 | orchestrator | 2026-01-13 01:38:36.714 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests
2026-01-13 01:38:46.160567 | orchestrator | 2026-01-13 01:38:36.714 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests
2026-01-13 01:38:46.160571 | orchestrator | 2026-01-13 01:38:36.714 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin
2026-01-13 01:38:46.160587 | orchestrator | 2026-01-13 01:38:36.714 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests
2026-01-13 01:38:46.160591 | orchestrator | 2026-01-13 01:38:36.714 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate
2026-01-13 01:38:46.160595 | orchestrator | 2026-01-13 01:38:36.714 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests
2026-01-13 01:38:46.160599 | orchestrator | 2026-01-13 01:38:36.714 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests
2026-01-13 01:38:46.160603 | orchestrator | 2026-01-13 01:38:36.714 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests
2026-01-13 01:38:46.160606 | orchestrator | 2026-01-13 01:38:36.714 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests
2026-01-13 01:38:46.160610 | orchestrator | 2026-01-13 01:38:36.714 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests
2026-01-13 01:38:46.160614 | orchestrator | --- import errors ---
2026-01-13 01:38:46.160619 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-13 01:38:46.160623 | orchestrator | Traceback (most recent call last):
2026-01-13 01:38:46.160628 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-13 01:38:46.160632 | orchestrator |     module = self._get_module_from_name(name)
2026-01-13 01:38:46.160638 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-13 01:38:46.160649 | orchestrator |     __import__(name)
2026-01-13 01:38:46.160655 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-13 01:38:46.160662 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module>
2026-01-13 01:38:46.160668 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-13 01:38:46.160674 | orchestrator |                ^^^^^^^^^^^^^^^^^^^^
2026-01-13 01:38:46.160680 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-13 01:38:46.160686 | orchestrator |
2026-01-13 01:38:46.160693 | orchestrator | ================================================================================
2026-01-13 01:38:46.160699 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
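The AttributeError above means the testtools build inside the tempest image no longer exposes `try_import`, which neutron_tempest_plugin calls at module-import time to optionally pull in the designate plugin. As a hedged illustration only (not the upstream fix), the behaviour the plugin expects can be reproduced with a small importlib-based helper; the function name `try_import` matches the call in the traceback, everything else here is an assumption:

```python
import importlib


def try_import(name, alternative=None):
    """Return the imported module, or `alternative` when the import fails.

    Mirrors what neutron_tempest_plugin expects from testtools.try_import:
    an optional dependency resolves to None instead of raising at import time.
    """
    try:
        return importlib.import_module(name)
    except ImportError:
        return alternative


# A present module imports normally; an absent one falls back quietly.
json_mod = try_import("json")
dns_base = try_import("designate_tempest_plugin.tests.base")
print(json_mod is not None, dns_base)  # → True None
```

With such a fallback in place, `dns_base` would simply be None when the designate plugin is not installed, and discovery would not abort on this module.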
2026-01-13 01:38:46.515790 | orchestrator |
2026-01-13 01:38:46.515861 | orchestrator | ## IMAGE (API)
2026-01-13 01:38:46.515866 | orchestrator |
2026-01-13 01:38:46.515871 | orchestrator | + echo
2026-01-13 01:38:46.515875 | orchestrator | + echo '## IMAGE (API)'
2026-01-13 01:38:46.515887 | orchestrator | + echo
2026-01-13 01:38:46.515895 | orchestrator | + _tempest tempest.api.image.v2
2026-01-13 01:38:46.515904 | orchestrator | + local regex=tempest.api.image.v2
2026-01-13 01:38:46.516205 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-01-13 01:38:46.516444 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-13 01:38:46.519389 | orchestrator | + tee -a /opt/tempest/20260113-0138.log
2026-01-13 01:38:50.043497 | orchestrator | 2026-01-13 01:38:50.047 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-13 01:39:04.045101 | orchestrator |
2026-01-13 01:39:04.045271 | orchestrator | =========================
2026-01-13 01:39:04.045284 | orchestrator | Failures during discovery
2026-01-13 01:39:04.045289 | orchestrator | =========================
2026-01-13 01:39:04.045293 | orchestrator | --- stdout ---
2026-01-13 01:39:04.045299 | orchestrator | 2026-01-13 01:38:53.834 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-13 01:39:04.045479 | orchestrator | --- import errors ---
2026-01-13 01:39:04.045483 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-13 01:39:04.045488 | orchestrator | Traceback (most recent call last):
2026-01-13 01:39:04.045492 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-13 01:39:04.045496 | orchestrator |     module = self._get_module_from_name(name)
2026-01-13 01:39:04.045500 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-13 01:39:04.045505 | orchestrator |     __import__(name)
2026-01-13 01:39:04.045508 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-13 01:39:04.045512 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module>
2026-01-13 01:39:04.045516 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-13 01:39:04.045524 | orchestrator |                ^^^^^^^^^^^^^^^^^^^^
2026-01-13 01:39:04.045530 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-13 01:39:04.045537 | orchestrator |
2026-01-13 01:39:04.045542 | orchestrator | ================================================================================
2026-01-13 01:39:04.045552 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
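Each "Failures during discovery" section above follows from how unittest-style discovery works: every module matching the test pattern is imported up front, and a single broken import surfaces as a synthetic failing test instead of any tests running. A hedged, self-contained miniature of that mechanism (the module name and missing dependency are made up for the sketch):

```python
# Reproduce, in miniature, how test discovery turns one broken module
# into an import error: stdlib unittest only, no tempest required.
import pathlib
import tempfile
import unittest


def flatten(suite):
    """Yield the individual test cases from a (possibly nested) TestSuite."""
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from flatten(item)
        else:
            yield item


tmpdir = tempfile.mkdtemp()
# A test module whose import fails, like test_dns_integration in the log.
bad = pathlib.Path(tmpdir) / "test_broken.py"
bad.write_text("import no_such_dependency_xyz\n")

loader = unittest.TestLoader()
suite = loader.discover(tmpdir)

# The failing import is wrapped in a synthetic test case named after the
# module; loader.errors keeps the traceback text for reporting.
names = [t.id() for t in flatten(suite)]
print(names)
```

Running the synthetic test would re-raise the ImportError, which is exactly the shape of the "Failed to import test module" entries in the tempest output.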
2026-01-13 01:39:04.398463 | orchestrator |
2026-01-13 01:39:04.398558 | orchestrator | ## NETWORK (API)
2026-01-13 01:39:04.398567 | orchestrator |
2026-01-13 01:39:04.398574 | orchestrator | + echo
2026-01-13 01:39:04.398580 | orchestrator | + echo '## NETWORK (API)'
2026-01-13 01:39:04.398588 | orchestrator | + echo
2026-01-13 01:39:04.398595 | orchestrator | + _tempest tempest.api.network
2026-01-13 01:39:04.398601 | orchestrator | + local regex=tempest.api.network
2026-01-13 01:39:04.399362 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-01-13 01:39:04.399736 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-13 01:39:04.405249 | orchestrator | + tee -a /opt/tempest/20260113-0139.log
2026-01-13 01:39:08.172457 | orchestrator | 2026-01-13 01:39:08.175 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-13 01:39:22.247590 | orchestrator |
2026-01-13 01:39:22.247696 | orchestrator | =========================
2026-01-13 01:39:22.247709 | orchestrator | Failures during discovery
2026-01-13 01:39:22.247716 | orchestrator | =========================
2026-01-13 01:39:22.247723 | orchestrator | --- stdout ---
2026-01-13 01:39:22.247733 | orchestrator | 2026-01-13 01:39:11.922 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-13 01:39:22.247756 |
orchestrator | 2026-01-13 01:39:11.924 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-13 01:39:22.247780 | orchestrator | 2026-01-13 01:39:11.924 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:39:22.247787 | orchestrator | 2026-01-13 01:39:11.925 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-13 01:39:22.247793 | orchestrator | 2026-01-13 01:39:11.925 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-13 01:39:22.247800 | orchestrator | 2026-01-13 01:39:11.925 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-13 01:39:22.247807 | orchestrator | 2026-01-13 01:39:11.925 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-13 01:39:22.247813 | orchestrator | 2026-01-13 01:39:11.925 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-13 01:39:22.247820 | orchestrator | 2026-01-13 01:39:11.926 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-13 01:39:22.247826 | orchestrator | 2026-01-13 01:39:11.926 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-13 01:39:22.247832 | orchestrator | 2026-01-13 01:39:11.926 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-13 01:39:22.247838 | orchestrator | 2026-01-13 01:39:11.926 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests 2026-01-13 01:39:22.247845 | 
orchestrator | 2026-01-13 01:39:11.926 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-13 01:39:22.247851 | orchestrator | 2026-01-13 01:39:11.927 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-13 01:39:22.247857 | orchestrator | 2026-01-13 01:39:11.927 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests 2026-01-13 01:39:22.247893 | orchestrator | 2026-01-13 01:39:11.927 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-13 01:39:22.247900 | orchestrator | 2026-01-13 01:39:11.927 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-13 01:39:22.247907 | orchestrator | 2026-01-13 01:39:11.927 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-13 01:39:22.247913 | orchestrator | 2026-01-13 01:39:11.927 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-13 01:39:22.247919 | orchestrator | 2026-01-13 01:39:11.927 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-13 01:39:22.247926 | orchestrator | 2026-01-13 01:39:11.927 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-13 01:39:22.247935 | orchestrator | 2026-01-13 01:39:11.929 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future. 
2026-01-13 01:39:22.247943 | orchestrator | 2026-01-13 01:39:12.741 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests 2026-01-13 01:39:22.247950 | orchestrator | 2026-01-13 01:39:12.741 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests 2026-01-13 01:39:22.247956 | orchestrator | 2026-01-13 01:39:12.741 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests 2026-01-13 01:39:22.248009 | orchestrator | 2026-01-13 01:39:12.741 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:39:22.248034 | orchestrator | 2026-01-13 01:39:12.741 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests 2026-01-13 01:39:22.248041 | orchestrator | 2026-01-13 01:39:12.741 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate 2026-01-13 01:39:22.248048 | orchestrator | 2026-01-13 01:39:12.742 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests 2026-01-13 01:39:22.248054 | orchestrator | 2026-01-13 01:39:12.742 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests 2026-01-13 01:39:22.248060 | orchestrator | 2026-01-13 01:39:12.742 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests 2026-01-13 01:39:22.248066 | orchestrator | 2026-01-13 01:39:12.742 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests 2026-01-13 01:39:22.248078 | orchestrator | 2026-01-13 01:39:12.742 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests 2026-01-13 01:39:22.248085 | orchestrator | --- import errors --- 2026-01-13 01:39:22.248092 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration 2026-01-13 01:39:22.248099 | orchestrator | Traceback 
(most recent call last): 2026-01-13 01:39:22.248106 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path 2026-01-13 01:39:22.248113 | orchestrator | module = self._get_module_from_name(name) 2026-01-13 01:39:22.248119 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name 2026-01-13 01:39:22.248126 | orchestrator | __import__(name) 2026-01-13 01:39:22.248132 | orchestrator | ~~~~~~~~~~^^^^^^ 2026-01-13 01:39:22.248138 | orchestrator | File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module> 2026-01-13 01:39:22.248145 | orchestrator | dns_base = testtools.try_import('designate_tempest_plugin.tests.base') 2026-01-13 01:39:22.248151 | orchestrator | ^^^^^^^^^^^^^^^^^^^^ 2026-01-13 01:39:22.248158 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import' 2026-01-13 01:39:22.248164 | orchestrator | 2026-01-13 01:39:22.248171 | orchestrator | ================================================================================ 2026-01-13 01:39:22.248184 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path. 
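Each section's xtrace lines (e.g. `+ _tempest tempest.api.network` followed by the expanded `docker run`) imply a small shell wrapper. A runnable sketch reconstructed from the trace — the image, mounts, and flags are taken from the log, but the wrapper body itself is an assumption — could look like:

```shell
#!/usr/bin/env bash
# Sketch of the _tempest wrapper implied by the xtrace output above; the
# function body is an assumption reconstructed from the trace. DOCKER is
# overridable so the sketch can be dry-run without a container runtime.
# (In the job itself the output is additionally piped through
#  `tee -a /opt/tempest/$(date +%Y%m%d-%H%M).log`.)
DOCKER="${DOCKER:-echo docker}"   # default: print the command instead of running it

_tempest() {
    local regex=$1
    $DOCKER run --rm \
        -v /opt/tempest:/tempest \
        -v /etc/ssl/certs:/etc/ssl/certs:ro \
        -e PYTHONWARNINGS=ignore::SyntaxWarning \
        --network host --name tempest \
        registry.osism.tech/osism/tempest:latest run \
        --workspace-path /tempest/workspace.yaml --workspace tempest \
        --exclude-list /tempest/exclude.lst \
        --regex "$regex" --concurrency 16
}

_tempest tempest.api.network
```

With the dry-run default, the sketch prints the same `docker run` command line the trace records, which is useful for checking the regex and mount arguments before running against a live cloud.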
2026-01-13 01:39:22.630282 | orchestrator | 2026-01-13 01:39:22.630354 | orchestrator | ## VOLUME (API) 2026-01-13 01:39:22.630360 | orchestrator | 2026-01-13 01:39:22.630365 | orchestrator | + echo 2026-01-13 01:39:22.630370 | orchestrator | + echo '## VOLUME (API)' 2026-01-13 01:39:22.630375 | orchestrator | + echo 2026-01-13 01:39:22.630379 | orchestrator | + _tempest tempest.api.volume 2026-01-13 01:39:22.630383 | orchestrator | + local regex=tempest.api.volume 2026-01-13 01:39:22.630639 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16 2026-01-13 01:39:22.632955 | orchestrator | ++ date +%Y%m%d-%H%M 2026-01-13 01:39:22.634834 | orchestrator | + tee -a /opt/tempest/20260113-0139.log 2026-01-13 01:39:26.391293 | orchestrator | 2026-01-13 01:39:26.394 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf 2026-01-13 01:39:26.497676 | orchestrator | 2026-01-13 01:39:26.501 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-13 01:39:26.497740 | orchestrator | 2026-01-13 01:39:26.501 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests 2026-01-13 01:39:26.497749 | orchestrator | 2026-01-13 01:39:26.502 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-13 01:39:26.497760 | orchestrator | 2026-01-13 01:39:26.502 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:39:26.498207 | orchestrator | 2026-01-13 01:39:26.502 1 INFO tempest.test_discover.plugins [-] Register additional config 
options from Tempest plugin: magnum_tests 2026-01-13 01:39:26.498400 | orchestrator | 2026-01-13 01:39:26.502 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-13 01:39:26.498417 | orchestrator | 2026-01-13 01:39:26.503 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-13 01:39:26.498947 | orchestrator | 2026-01-13 01:39:26.503 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-13 01:39:26.499058 | orchestrator | 2026-01-13 01:39:26.503 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-13 01:39:26.499685 | orchestrator | 2026-01-13 01:39:26.504 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-13 01:39:26.499756 | orchestrator | 2026-01-13 01:39:26.504 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-13 01:39:26.500487 | orchestrator | 2026-01-13 01:39:26.505 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-13 01:39:26.500519 | orchestrator | 2026-01-13 01:39:26.505 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests 2026-01-13 01:39:26.500621 | orchestrator | 2026-01-13 01:39:26.505 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-13 01:39:26.500631 | orchestrator | 2026-01-13 01:39:26.505 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-13 01:39:26.500805 | orchestrator | 2026-01-13 01:39:26.505 1 INFO tempest.test_discover.plugins [-] List additional config options 
registered by Tempest plugin: magnum_tests 2026-01-13 01:39:26.500812 | orchestrator | 2026-01-13 01:39:26.505 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-13 01:39:26.501087 | orchestrator | 2026-01-13 01:39:26.505 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-13 01:39:26.501453 | orchestrator | 2026-01-13 01:39:26.505 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-13 01:39:26.501472 | orchestrator | 2026-01-13 01:39:26.506 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-13 01:39:26.501476 | orchestrator | 2026-01-13 01:39:26.506 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-13 01:39:26.501489 | orchestrator | 2026-01-13 01:39:26.506 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-13 01:39:40.616965 | orchestrator | 2026-01-13 01:39:40.617142 | orchestrator | ========================= 2026-01-13 01:39:40.617156 | orchestrator | Failures during discovery 2026-01-13 01:39:40.617162 | orchestrator | ========================= 2026-01-13 01:39:40.617179 | orchestrator | --- stdout --- 2026-01-13 01:39:40.617226 | orchestrator | 2026-01-13 01:39:30.195 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf 2026-01-13 01:39:40.617235 | orchestrator | 2026-01-13 01:39:30.197 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-13 01:39:40.617243 | orchestrator | 2026-01-13 01:39:30.197 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests 2026-01-13 01:39:40.617248 | orchestrator 
| 2026-01-13 01:39:30.197 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-13 01:39:40.617254 | orchestrator | 2026-01-13 01:39:30.198 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:39:40.617259 | orchestrator | 2026-01-13 01:39:30.198 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-13 01:39:40.617264 | orchestrator | 2026-01-13 01:39:30.198 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-13 01:39:40.617269 | orchestrator | 2026-01-13 01:39:30.198 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-13 01:39:40.617274 | orchestrator | 2026-01-13 01:39:30.198 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-13 01:39:40.617278 | orchestrator | 2026-01-13 01:39:30.199 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-13 01:39:40.617283 | orchestrator | 2026-01-13 01:39:30.199 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-13 01:39:40.617288 | orchestrator | 2026-01-13 01:39:30.199 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-13 01:39:40.617293 | orchestrator | 2026-01-13 01:39:30.200 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-13 01:39:40.617298 | orchestrator | 2026-01-13 01:39:30.200 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests 2026-01-13 01:39:40.617303 | orchestrator | 
2026-01-13 01:39:30.200 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-13 01:39:40.617308 | orchestrator | 2026-01-13 01:39:30.200 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-13 01:39:40.617314 | orchestrator | 2026-01-13 01:39:30.200 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests 2026-01-13 01:39:40.617318 | orchestrator | 2026-01-13 01:39:30.200 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-13 01:39:40.617323 | orchestrator | 2026-01-13 01:39:30.200 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-13 01:39:40.617346 | orchestrator | 2026-01-13 01:39:30.200 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-13 01:39:40.617351 | orchestrator | 2026-01-13 01:39:30.200 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-13 01:39:40.617356 | orchestrator | 2026-01-13 01:39:30.200 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-13 01:39:40.617360 | orchestrator | 2026-01-13 01:39:30.200 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-13 01:39:40.617367 | orchestrator | 2026-01-13 01:39:30.203 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future. 
2026-01-13 01:39:40.617373 | orchestrator | 2026-01-13 01:39:31.084 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests 2026-01-13 01:39:40.617378 | orchestrator | 2026-01-13 01:39:31.084 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests 2026-01-13 01:39:40.617383 | orchestrator | 2026-01-13 01:39:31.084 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests 2026-01-13 01:39:40.617388 | orchestrator | 2026-01-13 01:39:31.084 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:39:40.617415 | orchestrator | 2026-01-13 01:39:31.084 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests 2026-01-13 01:39:40.617426 | orchestrator | 2026-01-13 01:39:31.084 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate 2026-01-13 01:39:40.617431 | orchestrator | 2026-01-13 01:39:31.084 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests 2026-01-13 01:39:40.617436 | orchestrator | 2026-01-13 01:39:31.085 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests 2026-01-13 01:39:40.617440 | orchestrator | 2026-01-13 01:39:31.085 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests 2026-01-13 01:39:40.617445 | orchestrator | 2026-01-13 01:39:31.085 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests 2026-01-13 01:39:40.617449 | orchestrator | 2026-01-13 01:39:31.085 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests 2026-01-13 01:39:40.617454 | orchestrator | --- import errors --- 2026-01-13 01:39:40.617459 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration 2026-01-13 01:39:40.617464 | orchestrator | Traceback 
(most recent call last): 2026-01-13 01:39:40.617470 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path 2026-01-13 01:39:40.617475 | orchestrator | module = self._get_module_from_name(name) 2026-01-13 01:39:40.617493 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name 2026-01-13 01:39:40.617502 | orchestrator | __import__(name) 2026-01-13 01:39:40.617509 | orchestrator | ~~~~~~~~~~^^^^^^ 2026-01-13 01:39:40.617520 | orchestrator | File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module> 2026-01-13 01:39:40.617527 | orchestrator | dns_base = testtools.try_import('designate_tempest_plugin.tests.base') 2026-01-13 01:39:40.617537 | orchestrator | ^^^^^^^^^^^^^^^^^^^^ 2026-01-13 01:39:40.617544 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import' 2026-01-13 01:39:40.617552 | orchestrator | 2026-01-13 01:39:40.617563 | orchestrator | ================================================================================ 2026-01-13 01:39:40.617570 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path. 
2026-01-13 01:39:40.992320 | orchestrator | 2026-01-13 01:39:40.992385 | orchestrator | ## COMPUTE (API) 2026-01-13 01:39:40.992391 | orchestrator | 2026-01-13 01:39:40.992396 | orchestrator | + echo 2026-01-13 01:39:40.992401 | orchestrator | + echo '## COMPUTE (API)' 2026-01-13 01:39:40.992423 | orchestrator | + echo 2026-01-13 01:39:40.992427 | orchestrator | + _tempest tempest.api.compute 2026-01-13 01:39:40.992431 | orchestrator | + local regex=tempest.api.compute 2026-01-13 01:39:40.992734 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16 2026-01-13 01:39:40.993365 | orchestrator | ++ date +%Y%m%d-%H%M 2026-01-13 01:39:40.997583 | orchestrator | + tee -a /opt/tempest/20260113-0139.log 2026-01-13 01:39:44.827427 | orchestrator | 2026-01-13 01:39:44.830 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf 2026-01-13 01:39:44.933149 | orchestrator | 2026-01-13 01:39:44.935 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-13 01:39:44.933267 | orchestrator | 2026-01-13 01:39:44.936 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests 2026-01-13 01:39:44.933280 | orchestrator | 2026-01-13 01:39:44.936 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-13 01:39:44.933287 | orchestrator | 2026-01-13 01:39:44.936 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:39:44.933295 | orchestrator | 2026-01-13 01:39:44.937 1 INFO tempest.test_discover.plugins [-] Register additional 
config options from Tempest plugin: magnum_tests 2026-01-13 01:39:44.933302 | orchestrator | 2026-01-13 01:39:44.937 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-13 01:39:44.933307 | orchestrator | 2026-01-13 01:39:44.937 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-13 01:39:44.933311 | orchestrator | 2026-01-13 01:39:44.937 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-13 01:39:44.933368 | orchestrator | 2026-01-13 01:39:44.937 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-13 01:39:44.933813 | orchestrator | 2026-01-13 01:39:44.938 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-13 01:39:44.933875 | orchestrator | 2026-01-13 01:39:44.938 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-13 01:39:44.934455 | orchestrator | 2026-01-13 01:39:44.938 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-13 01:39:44.934501 | orchestrator | 2026-01-13 01:39:44.938 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests 2026-01-13 01:39:44.934508 | orchestrator | 2026-01-13 01:39:44.938 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-13 01:39:44.934512 | orchestrator | 2026-01-13 01:39:44.938 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-13 01:39:44.934549 | orchestrator | 2026-01-13 01:39:44.938 1 INFO tempest.test_discover.plugins [-] List additional config 
options registered by Tempest plugin: magnum_tests 2026-01-13 01:39:44.934555 | orchestrator | 2026-01-13 01:39:44.939 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-13 01:39:44.934559 | orchestrator | 2026-01-13 01:39:44.939 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-13 01:39:44.934563 | orchestrator | 2026-01-13 01:39:44.939 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-13 01:39:44.934567 | orchestrator | 2026-01-13 01:39:44.939 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-13 01:39:44.934646 | orchestrator | 2026-01-13 01:39:44.939 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-13 01:39:44.934656 | orchestrator | 2026-01-13 01:39:44.939 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-13 01:39:58.101546 | orchestrator | 2026-01-13 01:39:58.101655 | orchestrator | ========================= 2026-01-13 01:39:58.101665 | orchestrator | Failures during discovery 2026-01-13 01:39:58.101670 | orchestrator | ========================= 2026-01-13 01:39:58.101674 | orchestrator | --- stdout --- 2026-01-13 01:39:58.101680 | orchestrator | 2026-01-13 01:39:48.477 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf 2026-01-13 01:39:58.101686 | orchestrator | 2026-01-13 01:39:48.479 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-13 01:39:58.101692 | orchestrator | 2026-01-13 01:39:48.479 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests 2026-01-13 01:39:58.101696 | 
orchestrator | 2026-01-13 01:39:48.479 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-13 01:39:58.101701 | orchestrator | 2026-01-13 01:39:48.479 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:39:58.101705 | orchestrator | 2026-01-13 01:39:48.480 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-13 01:39:58.101709 | orchestrator | 2026-01-13 01:39:48.480 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-13 01:39:58.101713 | orchestrator | 2026-01-13 01:39:48.480 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-13 01:39:58.101717 | orchestrator | 2026-01-13 01:39:48.480 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-13 01:39:58.101721 | orchestrator | 2026-01-13 01:39:48.480 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-13 01:39:58.101741 | orchestrator | 2026-01-13 01:39:48.481 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-13 01:39:58.101745 | orchestrator | 2026-01-13 01:39:48.481 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-13 01:39:58.101749 | orchestrator | 2026-01-13 01:39:48.481 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-13 01:39:58.101752 | orchestrator | 2026-01-13 01:39:48.482 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests 2026-01-13 01:39:58.101756 | 
orchestrator | 2026-01-13 01:39:48.482 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-13 01:39:58.101761 | orchestrator | 2026-01-13 01:39:48.482 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-13 01:39:58.101766 | orchestrator | 2026-01-13 01:39:48.482 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests 2026-01-13 01:39:58.101770 | orchestrator | 2026-01-13 01:39:48.482 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-13 01:39:58.101774 | orchestrator | 2026-01-13 01:39:48.482 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-13 01:39:58.101778 | orchestrator | 2026-01-13 01:39:48.482 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-13 01:39:58.101781 | orchestrator | 2026-01-13 01:39:48.482 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-13 01:39:58.101785 | orchestrator | 2026-01-13 01:39:48.482 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-13 01:39:58.101805 | orchestrator | 2026-01-13 01:39:48.482 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-13 01:39:58.101811 | orchestrator | 2026-01-13 01:39:48.485 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future. 
2026-01-13 01:39:58.101817 | orchestrator | 2026-01-13 01:39:49.298 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests 2026-01-13 01:39:58.101821 | orchestrator | 2026-01-13 01:39:49.298 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests 2026-01-13 01:39:58.101825 | orchestrator | 2026-01-13 01:39:49.299 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests 2026-01-13 01:39:58.101829 | orchestrator | 2026-01-13 01:39:49.299 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:39:58.101845 | orchestrator | 2026-01-13 01:39:49.299 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests 2026-01-13 01:39:58.101849 | orchestrator | 2026-01-13 01:39:49.299 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate 2026-01-13 01:39:58.101853 | orchestrator | 2026-01-13 01:39:49.299 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests 2026-01-13 01:39:58.101857 | orchestrator | 2026-01-13 01:39:49.299 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests 2026-01-13 01:39:58.101861 | orchestrator | 2026-01-13 01:39:49.299 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests 2026-01-13 01:39:58.101865 | orchestrator | 2026-01-13 01:39:49.299 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests 2026-01-13 01:39:58.101868 | orchestrator | 2026-01-13 01:39:49.299 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests 2026-01-13 01:39:58.101872 | orchestrator | --- import errors --- 2026-01-13 01:39:58.101877 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration 2026-01-13 01:39:58.101881 | orchestrator | Traceback 
(most recent call last): 2026-01-13 01:39:58.101886 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path 2026-01-13 01:39:58.101890 | orchestrator | module = self._get_module_from_name(name) 2026-01-13 01:39:58.101894 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name 2026-01-13 01:39:58.101898 | orchestrator | __import__(name) 2026-01-13 01:39:58.101902 | orchestrator | ~~~~~~~~~~^^^^^^ 2026-01-13 01:39:58.101906 | orchestrator | File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in 2026-01-13 01:39:58.101910 | orchestrator | dns_base = testtools.try_import('designate_tempest_plugin.tests.base') 2026-01-13 01:39:58.101913 | orchestrator | ^^^^^^^^^^^^^^^^^^^^ 2026-01-13 01:39:58.101917 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import' 2026-01-13 01:39:58.101921 | orchestrator | 2026-01-13 01:39:58.101925 | orchestrator | ================================================================================ 2026-01-13 01:39:58.101929 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path. 
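All three discovery failures in this run share the same root cause: `neutron_tempest_plugin/scenario/test_dns_integration.py` calls `testtools.try_import`, a helper that older testtools releases re-exported (from the `extras` package) but that the testtools version installed in this image no longer provides. A minimal stand-in, assuming only the import-or-fallback semantics visible in the traceback (this is a sketch for illustration, not a patch shipped by either project), could look like:

```python
import importlib


def try_import(name, alternative=None):
    """Import the named module, returning `alternative` if it is unavailable.

    Mimics the behavior of the `try_import` helper that older testtools
    releases re-exported from the `extras` package: no exception escapes,
    a missing module simply yields the fallback value (None by default).
    """
    try:
        return importlib.import_module(name)
    except ImportError:
        return alternative


# The failing call from the traceback, using the stand-in: if the
# designate tempest plugin is not installed, dns_base is simply None
# and the module import no longer raises AttributeError.
dns_base = try_import('designate_tempest_plugin.tests.base')
```

With this shim (or by pinning testtools to a release that still exports `try_import`), the `test_dns_integration` module would import cleanly and the discovery phase would no longer abort.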
2026-01-13 01:39:58.623986 | orchestrator | 2026-01-13 01:39:58.624081 | orchestrator | ## DNS (API) 2026-01-13 01:39:58.624090 | orchestrator | 2026-01-13 01:39:58.624097 | orchestrator | + echo 2026-01-13 01:39:58.624103 | orchestrator | + echo '## DNS (API)' 2026-01-13 01:39:58.624111 | orchestrator | + echo 2026-01-13 01:39:58.624118 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2 2026-01-13 01:39:58.624126 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2 2026-01-13 01:39:58.625331 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16 2026-01-13 01:39:58.626558 | orchestrator | ++ date +%Y%m%d-%H%M 2026-01-13 01:39:58.631032 | orchestrator | + tee -a /opt/tempest/20260113-0139.log 2026-01-13 01:40:02.711621 | orchestrator | 2026-01-13 01:40:02.714 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf 2026-01-13 01:40:02.809459 | orchestrator | 2026-01-13 01:40:02.812 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-13 01:40:02.809491 | orchestrator | 2026-01-13 01:40:02.813 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests 2026-01-13 01:40:02.809496 | orchestrator | 2026-01-13 01:40:02.813 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-13 01:40:02.809500 | orchestrator | 2026-01-13 01:40:02.813 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:40:02.809935 | orchestrator | 2026-01-13 01:40:02.814 1 INFO 
tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-13 01:40:02.809974 | orchestrator | 2026-01-13 01:40:02.814 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-13 01:40:02.810274 | orchestrator | 2026-01-13 01:40:02.814 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-13 01:40:02.810292 | orchestrator | 2026-01-13 01:40:02.814 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-13 01:40:02.810431 | orchestrator | 2026-01-13 01:40:02.814 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-13 01:40:02.810909 | orchestrator | 2026-01-13 01:40:02.815 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-13 01:40:02.811261 | orchestrator | 2026-01-13 01:40:02.815 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-13 01:40:02.812075 | orchestrator | 2026-01-13 01:40:02.816 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-13 01:40:02.812117 | orchestrator | 2026-01-13 01:40:02.816 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests 2026-01-13 01:40:02.812241 | orchestrator | 2026-01-13 01:40:02.816 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-13 01:40:02.812262 | orchestrator | 2026-01-13 01:40:02.816 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-13 01:40:02.812455 | orchestrator | 2026-01-13 01:40:02.816 1 INFO 
tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests 2026-01-13 01:40:02.812464 | orchestrator | 2026-01-13 01:40:02.816 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-13 01:40:02.812819 | orchestrator | 2026-01-13 01:40:02.816 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-13 01:40:02.812830 | orchestrator | 2026-01-13 01:40:02.817 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-13 01:40:02.812834 | orchestrator | 2026-01-13 01:40:02.817 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-13 01:40:02.812838 | orchestrator | 2026-01-13 01:40:02.817 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-13 01:40:02.812845 | orchestrator | 2026-01-13 01:40:02.817 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-13 01:40:17.780892 | orchestrator | 2026-01-13 01:40:17.780987 | orchestrator | ========================= 2026-01-13 01:40:17.781001 | orchestrator | Failures during discovery 2026-01-13 01:40:17.781006 | orchestrator | ========================= 2026-01-13 01:40:17.781024 | orchestrator | --- stdout --- 2026-01-13 01:40:17.781030 | orchestrator | 2026-01-13 01:40:06.536 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf 2026-01-13 01:40:17.781035 | orchestrator | 2026-01-13 01:40:06.538 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-13 01:40:17.781041 | orchestrator | 2026-01-13 01:40:06.538 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest 
plugin: barbican_tests 2026-01-13 01:40:17.781046 | orchestrator | 2026-01-13 01:40:06.538 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-13 01:40:17.781050 | orchestrator | 2026-01-13 01:40:06.538 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:40:17.781055 | orchestrator | 2026-01-13 01:40:06.539 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-13 01:40:17.781059 | orchestrator | 2026-01-13 01:40:06.539 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-13 01:40:17.781063 | orchestrator | 2026-01-13 01:40:06.539 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-13 01:40:17.781067 | orchestrator | 2026-01-13 01:40:06.539 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-13 01:40:17.781070 | orchestrator | 2026-01-13 01:40:06.539 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-13 01:40:17.781074 | orchestrator | 2026-01-13 01:40:06.540 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-13 01:40:17.781078 | orchestrator | 2026-01-13 01:40:06.540 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-13 01:40:17.781082 | orchestrator | 2026-01-13 01:40:06.540 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-13 01:40:17.781086 | orchestrator | 2026-01-13 01:40:06.541 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest 
plugin: barbican_tests 2026-01-13 01:40:17.781090 | orchestrator | 2026-01-13 01:40:06.541 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-13 01:40:17.781094 | orchestrator | 2026-01-13 01:40:06.541 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-13 01:40:17.781099 | orchestrator | 2026-01-13 01:40:06.541 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests 2026-01-13 01:40:17.781103 | orchestrator | 2026-01-13 01:40:06.541 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-13 01:40:17.781106 | orchestrator | 2026-01-13 01:40:06.541 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-13 01:40:17.781110 | orchestrator | 2026-01-13 01:40:06.541 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-13 01:40:17.781114 | orchestrator | 2026-01-13 01:40:06.541 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-13 01:40:17.781118 | orchestrator | 2026-01-13 01:40:06.541 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-13 01:40:17.781122 | orchestrator | 2026-01-13 01:40:06.541 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-13 01:40:17.781144 | orchestrator | 2026-01-13 01:40:06.544 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). 
Its value may be silently ignored in the future. 2026-01-13 01:40:17.781149 | orchestrator | 2026-01-13 01:40:07.373 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests 2026-01-13 01:40:17.781154 | orchestrator | 2026-01-13 01:40:07.373 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests 2026-01-13 01:40:17.781158 | orchestrator | 2026-01-13 01:40:07.373 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests 2026-01-13 01:40:17.781162 | orchestrator | 2026-01-13 01:40:07.373 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:40:17.781176 | orchestrator | 2026-01-13 01:40:07.373 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests 2026-01-13 01:40:17.781180 | orchestrator | 2026-01-13 01:40:07.374 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate 2026-01-13 01:40:17.781184 | orchestrator | 2026-01-13 01:40:07.374 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests 2026-01-13 01:40:17.781188 | orchestrator | 2026-01-13 01:40:07.374 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests 2026-01-13 01:40:17.781192 | orchestrator | 2026-01-13 01:40:07.374 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests 2026-01-13 01:40:17.781195 | orchestrator | 2026-01-13 01:40:07.374 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests 2026-01-13 01:40:17.781199 | orchestrator | 2026-01-13 01:40:07.374 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests 2026-01-13 01:40:17.781203 | orchestrator | --- import errors --- 2026-01-13 01:40:17.781208 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration 
2026-01-13 01:40:17.781212 | orchestrator | Traceback (most recent call last): 2026-01-13 01:40:17.781216 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path 2026-01-13 01:40:17.781220 | orchestrator | module = self._get_module_from_name(name) 2026-01-13 01:40:17.781224 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name 2026-01-13 01:40:17.781228 | orchestrator | __import__(name) 2026-01-13 01:40:17.781232 | orchestrator | ~~~~~~~~~~^^^^^^ 2026-01-13 01:40:17.781236 | orchestrator | File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in 2026-01-13 01:40:17.781240 | orchestrator | dns_base = testtools.try_import('designate_tempest_plugin.tests.base') 2026-01-13 01:40:17.781244 | orchestrator | ^^^^^^^^^^^^^^^^^^^^ 2026-01-13 01:40:17.781248 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import' 2026-01-13 01:40:17.781252 | orchestrator | 2026-01-13 01:40:17.781255 | orchestrator | ================================================================================ 2026-01-13 01:40:17.781259 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path. 
2026-01-13 01:40:18.314604 | orchestrator | 2026-01-13 01:40:18.314667 | orchestrator | ## OBJECT-STORE (API) 2026-01-13 01:40:18.314673 | orchestrator | 2026-01-13 01:40:18.314678 | orchestrator | + echo 2026-01-13 01:40:18.314682 | orchestrator | + echo '## OBJECT-STORE (API)' 2026-01-13 01:40:18.314686 | orchestrator | + echo 2026-01-13 01:40:18.314690 | orchestrator | + _tempest tempest.api.object_storage 2026-01-13 01:40:18.314695 | orchestrator | + local regex=tempest.api.object_storage 2026-01-13 01:40:18.315093 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16 2026-01-13 01:40:18.317507 | orchestrator | ++ date +%Y%m%d-%H%M 2026-01-13 01:40:18.319201 | orchestrator | + tee -a /opt/tempest/20260113-0140.log 2026-01-13 01:40:22.186472 | orchestrator | 2026-01-13 01:40:22.189 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf 2026-01-13 01:40:22.293634 | orchestrator | 2026-01-13 01:40:22.296 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-13 01:40:22.293685 | orchestrator | 2026-01-13 01:40:22.297 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests 2026-01-13 01:40:22.293692 | orchestrator | 2026-01-13 01:40:22.297 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-13 01:40:22.293814 | orchestrator | 2026-01-13 01:40:22.298 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:40:22.294600 | orchestrator | 2026-01-13 01:40:22.298 1 INFO 
tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-13 01:40:22.294898 | orchestrator | 2026-01-13 01:40:22.298 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-13 01:40:22.294942 | orchestrator | 2026-01-13 01:40:22.298 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-13 01:40:22.294947 | orchestrator | 2026-01-13 01:40:22.299 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-13 01:40:22.294952 | orchestrator | 2026-01-13 01:40:22.299 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-13 01:40:22.296359 | orchestrator | 2026-01-13 01:40:22.299 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-13 01:40:22.296395 | orchestrator | 2026-01-13 01:40:22.299 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-13 01:40:22.296510 | orchestrator | 2026-01-13 01:40:22.300 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-13 01:40:22.296570 | orchestrator | 2026-01-13 01:40:22.300 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests 2026-01-13 01:40:22.296580 | orchestrator | 2026-01-13 01:40:22.300 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-13 01:40:22.296587 | orchestrator | 2026-01-13 01:40:22.300 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-13 01:40:22.296593 | orchestrator | 2026-01-13 01:40:22.300 1 INFO 
tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests 2026-01-13 01:40:22.297183 | orchestrator | 2026-01-13 01:40:22.300 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-13 01:40:22.297202 | orchestrator | 2026-01-13 01:40:22.300 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-13 01:40:22.297208 | orchestrator | 2026-01-13 01:40:22.300 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-13 01:40:22.297214 | orchestrator | 2026-01-13 01:40:22.301 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-13 01:40:22.297218 | orchestrator | 2026-01-13 01:40:22.301 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-13 01:40:22.297222 | orchestrator | 2026-01-13 01:40:22.301 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-13 01:40:37.197690 | orchestrator | 2026-01-13 01:40:37.197781 | orchestrator | ========================= 2026-01-13 01:40:37.197794 | orchestrator | Failures during discovery 2026-01-13 01:40:37.197802 | orchestrator | ========================= 2026-01-13 01:40:37.197808 | orchestrator | --- stdout --- 2026-01-13 01:40:37.197816 | orchestrator | 2026-01-13 01:40:25.957 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf 2026-01-13 01:40:37.197864 | orchestrator | 2026-01-13 01:40:25.958 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-13 01:40:37.197882 | orchestrator | 2026-01-13 01:40:25.959 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest 
plugin: barbican_tests 2026-01-13 01:40:37.197888 | orchestrator | 2026-01-13 01:40:25.959 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-13 01:40:37.197895 | orchestrator | 2026-01-13 01:40:25.959 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:40:37.197901 | orchestrator | 2026-01-13 01:40:25.960 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-13 01:40:37.197906 | orchestrator | 2026-01-13 01:40:25.960 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-13 01:40:37.197912 | orchestrator | 2026-01-13 01:40:25.960 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-13 01:40:37.197918 | orchestrator | 2026-01-13 01:40:25.960 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-13 01:40:37.197925 | orchestrator | 2026-01-13 01:40:25.960 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-13 01:40:37.197931 | orchestrator | 2026-01-13 01:40:25.961 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-13 01:40:37.197937 | orchestrator | 2026-01-13 01:40:25.961 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-13 01:40:37.197943 | orchestrator | 2026-01-13 01:40:25.962 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-13 01:40:37.197950 | orchestrator | 2026-01-13 01:40:25.962 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest 
plugin: barbican_tests 2026-01-13 01:40:37.197956 | orchestrator | 2026-01-13 01:40:25.962 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-13 01:40:37.197962 | orchestrator | 2026-01-13 01:40:25.962 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-13 01:40:37.197970 | orchestrator | 2026-01-13 01:40:25.962 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests 2026-01-13 01:40:37.197976 | orchestrator | 2026-01-13 01:40:25.962 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-13 01:40:37.197982 | orchestrator | 2026-01-13 01:40:25.962 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-13 01:40:37.197988 | orchestrator | 2026-01-13 01:40:25.962 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-13 01:40:37.197994 | orchestrator | 2026-01-13 01:40:25.962 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-13 01:40:37.197999 | orchestrator | 2026-01-13 01:40:25.962 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-13 01:40:37.198005 | orchestrator | 2026-01-13 01:40:25.962 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-13 01:40:37.198061 | orchestrator | 2026-01-13 01:40:25.965 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). 
Its value may be silently ignored in the future. 2026-01-13 01:40:37.198072 | orchestrator | 2026-01-13 01:40:26.800 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests 2026-01-13 01:40:37.198089 | orchestrator | 2026-01-13 01:40:26.800 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests 2026-01-13 01:40:37.198095 | orchestrator | 2026-01-13 01:40:26.800 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests 2026-01-13 01:40:37.198102 | orchestrator | 2026-01-13 01:40:26.800 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin 2026-01-13 01:40:37.198126 | orchestrator | 2026-01-13 01:40:26.800 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests 2026-01-13 01:40:37.198133 | orchestrator | 2026-01-13 01:40:26.800 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate 2026-01-13 01:40:37.198139 | orchestrator | 2026-01-13 01:40:26.800 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests 2026-01-13 01:40:37.198145 | orchestrator | 2026-01-13 01:40:26.800 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests 2026-01-13 01:40:37.198150 | orchestrator | 2026-01-13 01:40:26.801 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests 2026-01-13 01:40:37.198157 | orchestrator | 2026-01-13 01:40:26.801 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests 2026-01-13 01:40:37.198163 | orchestrator | 2026-01-13 01:40:26.801 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests 2026-01-13 01:40:37.198170 | orchestrator | --- import errors --- 2026-01-13 01:40:37.198177 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration 
2026-01-13 01:40:37.198183 | orchestrator | Traceback (most recent call last): 2026-01-13 01:40:37.198206 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path 2026-01-13 01:40:37.198212 | orchestrator | module = self._get_module_from_name(name) 2026-01-13 01:40:37.198218 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name 2026-01-13 01:40:37.198224 | orchestrator | __import__(name) 2026-01-13 01:40:37.198230 | orchestrator | ~~~~~~~~~~^^^^^^ 2026-01-13 01:40:37.198236 | orchestrator | File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in 2026-01-13 01:40:37.198243 | orchestrator | dns_base = testtools.try_import('designate_tempest_plugin.tests.base') 2026-01-13 01:40:37.198249 | orchestrator | ^^^^^^^^^^^^^^^^^^^^ 2026-01-13 01:40:37.198255 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import' 2026-01-13 01:40:37.198261 | orchestrator | 2026-01-13 01:40:37.198268 | orchestrator | ================================================================================ 2026-01-13 01:40:37.198274 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path. 
2026-01-13 01:40:37.805870 | orchestrator | ok: Runtime: 0:03:43.199075 2026-01-13 01:40:37.833291 | 2026-01-13 01:40:37.833439 | TASK [Check prometheus alert status] 2026-01-13 01:40:38.374112 | orchestrator | skipping: Conditional result was False 2026-01-13 01:40:38.378417 | 2026-01-13 01:40:38.378602 | PLAY RECAP 2026-01-13 01:40:38.378748 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0 2026-01-13 01:40:38.378815 | 2026-01-13 01:40:38.627168 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-01-13 01:40:38.629842 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-01-13 01:40:39.468123 | 2026-01-13 01:40:39.468297 | PLAY [Post output play] 2026-01-13 01:40:39.485414 | 2026-01-13 01:40:39.485582 | LOOP [stage-output : Register sources] 2026-01-13 01:40:39.548194 | 2026-01-13 01:40:39.548443 | TASK [stage-output : Check sudo] 2026-01-13 01:40:40.514422 | orchestrator | sudo: a password is required 2026-01-13 01:40:40.586096 | orchestrator | ok: Runtime: 0:00:00.013619 2026-01-13 01:40:40.593405 | 2026-01-13 01:40:40.593522 | LOOP [stage-output : Set source and destination for files and folders] 2026-01-13 01:40:40.625163 | 2026-01-13 01:40:40.625386 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-01-13 01:40:40.702680 | orchestrator | ok 2026-01-13 01:40:40.713642 | 2026-01-13 01:40:40.713873 | LOOP [stage-output : Ensure target folders exist] 2026-01-13 01:40:41.250758 | orchestrator | ok: "docs" 2026-01-13 01:40:41.251214 | 2026-01-13 01:40:41.479147 | orchestrator | ok: "artifacts" 2026-01-13 01:40:41.733430 | orchestrator | ok: "logs" 2026-01-13 01:40:41.751437 | 2026-01-13 01:40:41.751595 | LOOP [stage-output : Copy files and folders to staging folder] 2026-01-13 01:40:41.790150 | 2026-01-13 01:40:41.790498 | TASK [stage-output : Make all log files readable] 2026-01-13 01:40:42.098392 | orchestrator | ok 
2026-01-13 01:40:42.109824 |
2026-01-13 01:40:42.110059 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-13 01:40:42.145180 | orchestrator | skipping: Conditional result was False
2026-01-13 01:40:42.164545 |
2026-01-13 01:40:42.164818 | TASK [stage-output : Discover log files for compression]
2026-01-13 01:40:42.190057 | orchestrator | skipping: Conditional result was False
2026-01-13 01:40:42.203713 |
2026-01-13 01:40:42.203879 | LOOP [stage-output : Archive everything from logs]
2026-01-13 01:40:42.242913 |
2026-01-13 01:40:42.243156 | PLAY [Post cleanup play]
2026-01-13 01:40:42.253820 |
2026-01-13 01:40:42.254032 | TASK [Set cloud fact (Zuul deployment)]
2026-01-13 01:40:42.321608 | orchestrator | ok
2026-01-13 01:40:42.332635 |
2026-01-13 01:40:42.332770 | TASK [Set cloud fact (local deployment)]
2026-01-13 01:40:42.377428 | orchestrator | skipping: Conditional result was False
2026-01-13 01:40:42.395252 |
2026-01-13 01:40:42.395454 | TASK [Clean the cloud environment]
2026-01-13 01:40:43.553124 | orchestrator | 2026-01-13 01:40:43 - clean up servers
2026-01-13 01:40:44.327003 | orchestrator | 2026-01-13 01:40:44 - testbed-manager
2026-01-13 01:40:44.410187 | orchestrator | 2026-01-13 01:40:44 - testbed-node-0
2026-01-13 01:40:44.497816 | orchestrator | 2026-01-13 01:40:44 - testbed-node-2
2026-01-13 01:40:44.589679 | orchestrator | 2026-01-13 01:40:44 - testbed-node-4
2026-01-13 01:40:44.682203 | orchestrator | 2026-01-13 01:40:44 - testbed-node-3
2026-01-13 01:40:44.775986 | orchestrator | 2026-01-13 01:40:44 - testbed-node-5
2026-01-13 01:40:44.893225 | orchestrator | 2026-01-13 01:40:44 - testbed-node-1
2026-01-13 01:40:44.981845 | orchestrator | 2026-01-13 01:40:44 - clean up keypairs
2026-01-13 01:40:45.001024 | orchestrator | 2026-01-13 01:40:45 - testbed
2026-01-13 01:40:45.027866 | orchestrator | 2026-01-13 01:40:45 - wait for servers to be gone
2026-01-13 01:40:55.850830 | orchestrator | 2026-01-13 01:40:55 - clean up ports
2026-01-13 01:40:56.047928 | orchestrator | 2026-01-13 01:40:56 - 033250be-3730-4e28-8bae-574aa81261e3
2026-01-13 01:40:56.306784 | orchestrator | 2026-01-13 01:40:56 - 21f6ad4e-2daf-49cf-9f3a-83bd124ef2b6
2026-01-13 01:40:57.269686 | orchestrator | 2026-01-13 01:40:57 - 5133aec3-bcc8-4613-a099-273522a85686
2026-01-13 01:40:57.511928 | orchestrator | 2026-01-13 01:40:57 - 93e66259-6123-483f-83e0-5ca60df99fb6
2026-01-13 01:40:57.716405 | orchestrator | 2026-01-13 01:40:57 - b46751e7-e6e1-4843-8018-ec0fe26ce24c
2026-01-13 01:40:57.936653 | orchestrator | 2026-01-13 01:40:57 - dc661a00-4fe6-413a-9683-d66479df831d
2026-01-13 01:40:58.156182 | orchestrator | 2026-01-13 01:40:58 - f9c66452-e535-44f2-abde-2653138d6638
2026-01-13 01:40:58.387694 | orchestrator | 2026-01-13 01:40:58 - clean up volumes
2026-01-13 01:40:58.511110 | orchestrator | 2026-01-13 01:40:58 - testbed-volume-3-node-base
2026-01-13 01:40:58.551177 | orchestrator | 2026-01-13 01:40:58 - testbed-volume-2-node-base
2026-01-13 01:40:58.592058 | orchestrator | 2026-01-13 01:40:58 - testbed-volume-manager-base
2026-01-13 01:40:58.632480 | orchestrator | 2026-01-13 01:40:58 - testbed-volume-1-node-base
2026-01-13 01:40:58.675418 | orchestrator | 2026-01-13 01:40:58 - testbed-volume-4-node-base
2026-01-13 01:40:58.715165 | orchestrator | 2026-01-13 01:40:58 - testbed-volume-5-node-base
2026-01-13 01:40:58.758338 | orchestrator | 2026-01-13 01:40:58 - testbed-volume-0-node-base
2026-01-13 01:40:58.801700 | orchestrator | 2026-01-13 01:40:58 - testbed-volume-8-node-5
2026-01-13 01:40:58.841416 | orchestrator | 2026-01-13 01:40:58 - testbed-volume-1-node-4
2026-01-13 01:40:58.885118 | orchestrator | 2026-01-13 01:40:58 - testbed-volume-2-node-5
2026-01-13 01:40:58.928002 | orchestrator | 2026-01-13 01:40:58 - testbed-volume-4-node-4
2026-01-13 01:40:58.968133 | orchestrator | 2026-01-13 01:40:58 - testbed-volume-7-node-4
2026-01-13 01:40:59.015239 | orchestrator | 2026-01-13 01:40:59 - testbed-volume-5-node-5
2026-01-13 01:40:59.056520 | orchestrator | 2026-01-13 01:40:59 - testbed-volume-6-node-3
2026-01-13 01:40:59.102313 | orchestrator | 2026-01-13 01:40:59 - testbed-volume-3-node-3
2026-01-13 01:40:59.139942 | orchestrator | 2026-01-13 01:40:59 - testbed-volume-0-node-3
2026-01-13 01:40:59.181110 | orchestrator | 2026-01-13 01:40:59 - disconnect routers
2026-01-13 01:40:59.322625 | orchestrator | 2026-01-13 01:40:59 - testbed
2026-01-13 01:41:00.415083 | orchestrator | 2026-01-13 01:41:00 - clean up subnets
2026-01-13 01:41:00.454589 | orchestrator | 2026-01-13 01:41:00 - subnet-testbed-management
2026-01-13 01:41:00.596697 | orchestrator | 2026-01-13 01:41:00 - clean up networks
2026-01-13 01:41:00.766960 | orchestrator | 2026-01-13 01:41:00 - net-testbed-management
2026-01-13 01:41:01.197461 | orchestrator | 2026-01-13 01:41:01 - clean up security groups
2026-01-13 01:41:01.246185 | orchestrator | 2026-01-13 01:41:01 - testbed-node
2026-01-13 01:41:01.362854 | orchestrator | 2026-01-13 01:41:01 - testbed-management
2026-01-13 01:41:01.475363 | orchestrator | 2026-01-13 01:41:01 - clean up floating ips
2026-01-13 01:41:01.507376 | orchestrator | 2026-01-13 01:41:01 - 81.163.193.234
2026-01-13 01:41:01.869302 | orchestrator | 2026-01-13 01:41:01 - clean up routers
2026-01-13 01:41:01.927665 | orchestrator | 2026-01-13 01:41:01 - testbed
2026-01-13 01:41:03.459785 | orchestrator | ok: Runtime: 0:00:20.620708
2026-01-13 01:41:03.464871 |
2026-01-13 01:41:03.465066 | PLAY RECAP
2026-01-13 01:41:03.465168 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-13 01:41:03.465216 |
2026-01-13 01:41:03.614051 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-13 01:41:03.615239 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-13 01:41:04.392668 |
2026-01-13 01:41:04.392847 | PLAY [Cleanup play]
2026-01-13 01:41:04.410203 |
2026-01-13 01:41:04.410370 | TASK [Set cloud fact (Zuul deployment)]
2026-01-13 01:41:04.476292 | orchestrator | ok
2026-01-13 01:41:04.488053 |
2026-01-13 01:41:04.488244 | TASK [Set cloud fact (local deployment)]
2026-01-13 01:41:04.523767 | orchestrator | skipping: Conditional result was False
2026-01-13 01:41:04.542760 |
2026-01-13 01:41:04.543076 | TASK [Clean the cloud environment]
2026-01-13 01:41:05.817714 | orchestrator | 2026-01-13 01:41:05 - clean up servers
2026-01-13 01:41:06.309322 | orchestrator | 2026-01-13 01:41:06 - clean up keypairs
2026-01-13 01:41:06.334741 | orchestrator | 2026-01-13 01:41:06 - wait for servers to be gone
2026-01-13 01:41:06.381808 | orchestrator | 2026-01-13 01:41:06 - clean up ports
2026-01-13 01:41:06.500670 | orchestrator | 2026-01-13 01:41:06 - clean up volumes
2026-01-13 01:41:06.587723 | orchestrator | 2026-01-13 01:41:06 - disconnect routers
2026-01-13 01:41:06.621656 | orchestrator | 2026-01-13 01:41:06 - clean up subnets
2026-01-13 01:41:06.648874 | orchestrator | 2026-01-13 01:41:06 - clean up networks
2026-01-13 01:41:06.801976 | orchestrator | 2026-01-13 01:41:06 - clean up security groups
2026-01-13 01:41:06.840072 | orchestrator | 2026-01-13 01:41:06 - clean up floating ips
2026-01-13 01:41:06.863409 | orchestrator | 2026-01-13 01:41:06 - clean up routers
2026-01-13 01:41:07.099048 | orchestrator | ok: Runtime: 0:00:01.500762
2026-01-13 01:41:07.103022 |
2026-01-13 01:41:07.103190 | PLAY RECAP
2026-01-13 01:41:07.103329 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-13 01:41:07.103400 |
2026-01-13 01:41:07.256446 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-13 01:41:07.257536 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-13 01:41:08.019778 |
2026-01-13 01:41:08.019981 | PLAY [Base post-fetch]
2026-01-13 01:41:08.036586 |
2026-01-13 01:41:08.036761 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-13 01:41:08.093767 | orchestrator | skipping: Conditional result was False
2026-01-13 01:41:08.113668 |
2026-01-13 01:41:08.114069 | TASK [fetch-output : Set log path for single node]
2026-01-13 01:41:08.169947 | orchestrator | ok
2026-01-13 01:41:08.179283 |
2026-01-13 01:41:08.179440 | LOOP [fetch-output : Ensure local output dirs]
2026-01-13 01:41:08.685419 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/ca76e7f80cbb4cdda68907de4afef11c/work/logs"
2026-01-13 01:41:08.987511 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/ca76e7f80cbb4cdda68907de4afef11c/work/artifacts"
2026-01-13 01:41:09.290405 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/ca76e7f80cbb4cdda68907de4afef11c/work/docs"
2026-01-13 01:41:09.320247 |
2026-01-13 01:41:09.320425 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-13 01:41:10.278447 | orchestrator | changed: .d..t...... ./
2026-01-13 01:41:10.278698 | orchestrator | changed: All items complete
2026-01-13 01:41:10.278736 |
2026-01-13 01:41:10.994697 | orchestrator | changed: .d..t...... ./
2026-01-13 01:41:11.747991 | orchestrator | changed: .d..t...... ./
2026-01-13 01:41:11.768768 |
2026-01-13 01:41:11.768913 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-13 01:41:11.809178 | orchestrator | skipping: Conditional result was False
2026-01-13 01:41:11.811243 | orchestrator | skipping: Conditional result was False
2026-01-13 01:41:11.831562 |
2026-01-13 01:41:11.831772 | PLAY RECAP
2026-01-13 01:41:11.831895 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-13 01:41:11.831989 |
2026-01-13 01:41:11.977390 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-13 01:41:11.980302 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-13 01:41:12.761585 |
2026-01-13 01:41:12.761770 | PLAY [Base post]
2026-01-13 01:41:12.777901 |
2026-01-13 01:41:12.778095 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-13 01:41:13.796538 | orchestrator | changed
2026-01-13 01:41:13.808787 |
2026-01-13 01:41:13.809013 | PLAY RECAP
2026-01-13 01:41:13.809128 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-13 01:41:13.809240 |
2026-01-13 01:41:13.944387 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-13 01:41:13.948532 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-13 01:41:14.788942 |
2026-01-13 01:41:14.789122 | PLAY [Base post-logs]
2026-01-13 01:41:14.800226 |
2026-01-13 01:41:14.800379 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-13 01:41:15.295801 | localhost | changed
2026-01-13 01:41:15.306812 |
2026-01-13 01:41:15.307111 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-13 01:41:15.345997 | localhost | ok
2026-01-13 01:41:15.352769 |
2026-01-13 01:41:15.352953 | TASK [Set zuul-log-path fact]
2026-01-13 01:41:15.371652 | localhost | ok
2026-01-13 01:41:15.386246 |
2026-01-13 01:41:15.386520 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-13 01:41:15.425974 | localhost | ok
2026-01-13 01:41:15.434485 |
2026-01-13 01:41:15.434697 | TASK [upload-logs : Create log directories]
2026-01-13 01:41:15.952577 | localhost | changed
2026-01-13 01:41:15.955705 |
2026-01-13 01:41:15.955820 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-13 01:41:16.521448 | localhost -> localhost | ok: Runtime: 0:00:00.007632
2026-01-13 01:41:16.533554 |
2026-01-13 01:41:16.533812 | TASK [upload-logs : Upload logs to log server]
2026-01-13 01:41:17.176188 | localhost | Output suppressed because no_log was given
2026-01-13 01:41:17.178759 |
2026-01-13 01:41:17.178957 | LOOP [upload-logs : Compress console log and json output]
2026-01-13 01:41:17.235391 | localhost | skipping: Conditional result was False
2026-01-13 01:41:17.243683 | localhost | skipping: Conditional result was False
2026-01-13 01:41:17.249153 |
2026-01-13 01:41:17.249280 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-13 01:41:17.295542 | localhost | skipping: Conditional result was False
2026-01-13 01:41:17.295845 |
2026-01-13 01:41:17.302974 | localhost | skipping: Conditional result was False
2026-01-13 01:41:17.310572 |
2026-01-13 01:41:17.310879 | LOOP [upload-logs : Upload console log and json output]