2026-02-02 00:00:07.967943 | Job console starting
2026-02-02 00:00:07.999497 | Updating git repos
2026-02-02 00:00:08.283411 | Cloning repos into workspace
2026-02-02 00:00:08.563374 | Restoring repo states
2026-02-02 00:00:08.596110 | Merging changes
2026-02-02 00:00:08.596127 | Checking out repos
2026-02-02 00:00:08.900105 | Preparing playbooks
2026-02-02 00:00:09.990377 | Running Ansible setup
2026-02-02 00:00:18.433197 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-02 00:00:21.319494 |
2026-02-02 00:00:21.319625 | PLAY [Base pre]
2026-02-02 00:00:21.414784 |
2026-02-02 00:00:21.414964 | TASK [Setup log path fact]
2026-02-02 00:00:21.470810 | orchestrator | ok
2026-02-02 00:00:21.527642 |
2026-02-02 00:00:21.528359 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-02 00:00:21.602734 | orchestrator | ok
2026-02-02 00:00:21.648465 |
2026-02-02 00:00:21.648587 | TASK [emit-job-header : Print job information]
2026-02-02 00:00:21.740244 | # Job Information
2026-02-02 00:00:21.740422 | Ansible Version: 2.16.14
2026-02-02 00:00:21.740458 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-02-02 00:00:21.740490 | Pipeline: periodic-midnight
2026-02-02 00:00:21.740513 | Executor: 521e9411259a
2026-02-02 00:00:21.740534 | Triggered by: https://github.com/osism/testbed
2026-02-02 00:00:21.740556 | Event ID: c92ce9b72d3f4113a0422af615ebd436
2026-02-02 00:00:21.755651 |
2026-02-02 00:00:21.755808 | LOOP [emit-job-header : Print node information]
2026-02-02 00:00:22.108687 | orchestrator | ok:
2026-02-02 00:00:22.108917 | orchestrator | # Node Information
2026-02-02 00:00:22.109240 | orchestrator | Inventory Hostname: orchestrator
2026-02-02 00:00:22.109329 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-02 00:00:22.109352 | orchestrator | Username: zuul-testbed01
2026-02-02 00:00:22.109372 | orchestrator | Distro: Debian 12.13
2026-02-02 00:00:22.109393 | orchestrator | Provider: static-testbed
2026-02-02 00:00:22.109411 | orchestrator | Region:
2026-02-02 00:00:22.109429 | orchestrator | Label: testbed-orchestrator
2026-02-02 00:00:22.109446 | orchestrator | Product Name: OpenStack Nova
2026-02-02 00:00:22.109463 | orchestrator | Interface IP: 81.163.193.140
2026-02-02 00:00:22.128414 |
2026-02-02 00:00:22.128563 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-02 00:00:23.828038 | orchestrator -> localhost | changed
2026-02-02 00:00:23.838509 |
2026-02-02 00:00:23.838606 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-02 00:00:26.654593 | orchestrator -> localhost | changed
2026-02-02 00:00:26.666425 |
2026-02-02 00:00:26.666526 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-02 00:00:27.667404 | orchestrator -> localhost | ok
2026-02-02 00:00:27.672955 |
2026-02-02 00:00:27.673041 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-02 00:00:27.731402 | orchestrator | ok
2026-02-02 00:00:27.779178 | orchestrator | included: /var/lib/zuul/builds/b4e47cf19b7542679b401536a50ab8f8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-02 00:00:27.805208 |
2026-02-02 00:00:27.805298 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-02 00:00:30.965699 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-02 00:00:30.966940 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/b4e47cf19b7542679b401536a50ab8f8/work/b4e47cf19b7542679b401536a50ab8f8_id_rsa
2026-02-02 00:00:30.967003 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/b4e47cf19b7542679b401536a50ab8f8/work/b4e47cf19b7542679b401536a50ab8f8_id_rsa.pub
2026-02-02 00:00:30.967026 | orchestrator -> localhost | The key fingerprint is:
2026-02-02 00:00:30.967050 | orchestrator -> localhost | SHA256:ZlmsFcnqnBz1B+AaMKCc6WhADujMs2uBk9gbsfCDGR0 zuul-build-sshkey
2026-02-02 00:00:30.967069 | orchestrator -> localhost | The key's randomart image is:
2026-02-02 00:00:30.967095 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-02 00:00:30.967114 | orchestrator -> localhost | |o. ..o .oo |
2026-02-02 00:00:30.967132 | orchestrator -> localhost | |=.E+ o o+.. |
2026-02-02 00:00:30.967149 | orchestrator -> localhost | |=o=. .o=. . |
2026-02-02 00:00:30.967166 | orchestrator -> localhost | |+Bo oB . . |
2026-02-02 00:00:30.967182 | orchestrator -> localhost | |+X+o +So . |
2026-02-02 00:00:30.967203 | orchestrator -> localhost | |Oo* o= |
2026-02-02 00:00:30.967221 | orchestrator -> localhost | | .o+ |
2026-02-02 00:00:30.967237 | orchestrator -> localhost | | o. |
2026-02-02 00:00:30.967255 | orchestrator -> localhost | |. |
2026-02-02 00:00:30.967272 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-02 00:00:30.967321 | orchestrator -> localhost | ok: Runtime: 0:00:01.868592
2026-02-02 00:00:30.973171 |
2026-02-02 00:00:30.973249 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-02 00:00:31.005755 | orchestrator | ok
2026-02-02 00:00:31.032803 | orchestrator | included: /var/lib/zuul/builds/b4e47cf19b7542679b401536a50ab8f8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-02 00:00:31.051337 |
2026-02-02 00:00:31.051436 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-02 00:00:31.100074 | orchestrator | skipping: Conditional result was False
2026-02-02 00:00:31.106302 |
2026-02-02 00:00:31.106387 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-02 00:00:32.104565 | orchestrator | changed
2026-02-02 00:00:32.126860 |
2026-02-02 00:00:32.127019 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-02 00:00:32.419787 | orchestrator | ok
2026-02-02 00:00:32.425207 |
2026-02-02 00:00:32.425288 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-02 00:00:32.958224 | orchestrator | ok
2026-02-02 00:00:32.967929 |
2026-02-02 00:00:32.968023 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-02 00:00:33.447148 | orchestrator | ok
2026-02-02 00:00:33.457421 |
2026-02-02 00:00:33.457515 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-02 00:00:33.527257 | orchestrator | skipping: Conditional result was False
2026-02-02 00:00:33.532797 |
2026-02-02 00:00:33.532885 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-02 00:00:34.551442 | orchestrator -> localhost | changed
2026-02-02 00:00:34.580984 |
2026-02-02 00:00:34.581090 | TASK [add-build-sshkey : Add back temp key]
2026-02-02 00:00:35.557442 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/b4e47cf19b7542679b401536a50ab8f8/work/b4e47cf19b7542679b401536a50ab8f8_id_rsa (zuul-build-sshkey)
2026-02-02 00:00:35.557632 | orchestrator -> localhost | ok: Runtime: 0:00:00.033716
2026-02-02 00:00:35.566003 |
2026-02-02 00:00:35.566094 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-02 00:00:36.087615 | orchestrator | ok
2026-02-02 00:00:36.092325 |
2026-02-02 00:00:36.092398 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-02 00:00:36.131233 | orchestrator | skipping: Conditional result was False
2026-02-02 00:00:36.204017 |
2026-02-02 00:00:36.204114 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-02 00:00:36.783221 | orchestrator | ok
2026-02-02 00:00:36.808761 |
2026-02-02 00:00:36.808859 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-02 00:00:36.892092 | orchestrator | ok
2026-02-02 00:00:36.898003 |
2026-02-02 00:00:36.898085 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-02 00:00:37.677031 | orchestrator -> localhost | ok
2026-02-02 00:00:37.682917 |
2026-02-02 00:00:37.683003 | TASK [validate-host : Collect information about the host]
2026-02-02 00:00:39.537101 | orchestrator | ok
2026-02-02 00:00:39.559084 |
2026-02-02 00:00:39.559193 | TASK [validate-host : Sanitize hostname]
2026-02-02 00:00:39.609276 | orchestrator | ok
2026-02-02 00:00:39.614461 |
2026-02-02 00:00:39.614540 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-02 00:00:40.723322 | orchestrator -> localhost | changed
2026-02-02 00:00:40.729241 |
2026-02-02 00:00:40.729327 | TASK [validate-host : Collect information about zuul worker]
2026-02-02 00:00:41.191927 | orchestrator | ok
2026-02-02 00:00:41.197161 |
2026-02-02 00:00:41.197244 | TASK [validate-host : Write out all zuul information for each host]
2026-02-02 00:00:42.301814 | orchestrator -> localhost | changed
2026-02-02 00:00:42.310320 |
2026-02-02 00:00:42.310404 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-02 00:00:42.652849 | orchestrator | ok
2026-02-02 00:00:42.658815 |
2026-02-02 00:00:42.658909 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-02 00:02:10.146885 | orchestrator | changed:
2026-02-02 00:02:10.147116 | orchestrator | .d..t...... src/
2026-02-02 00:02:10.147152 | orchestrator | .d..t...... src/github.com/
2026-02-02 00:02:10.147178 | orchestrator | .d..t...... src/github.com/osism/
2026-02-02 00:02:10.147200 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-02 00:02:10.147221 | orchestrator | RedHat.yml
2026-02-02 00:02:10.162591 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-02 00:02:10.162608 | orchestrator | RedHat.yml
2026-02-02 00:02:10.162661 | orchestrator | = 1.53.0"...
2026-02-02 00:02:26.880868 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-02 00:02:26.900890 | orchestrator | - Finding latest version of hashicorp/null...
2026-02-02 00:02:27.574108 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-02 00:02:28.442727 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-02 00:02:28.509231 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-02 00:02:29.045875 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-02 00:02:29.115268 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-02 00:02:29.639885 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-02 00:02:29.639954 | orchestrator |
2026-02-02 00:02:29.639961 | orchestrator | Providers are signed by their developers.
2026-02-02 00:02:29.639967 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-02 00:02:29.639971 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-02 00:02:29.639978 | orchestrator |
2026-02-02 00:02:29.639982 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-02 00:02:29.639993 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-02 00:02:29.640006 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-02 00:02:29.640010 | orchestrator | you run "tofu init" in the future.
2026-02-02 00:02:29.640232 | orchestrator |
2026-02-02 00:02:29.640251 | orchestrator | OpenTofu has been successfully initialized!
2026-02-02 00:02:29.640268 | orchestrator |
2026-02-02 00:02:29.640274 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-02 00:02:29.640278 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-02 00:02:29.640282 | orchestrator | should now work.
2026-02-02 00:02:29.640286 | orchestrator |
2026-02-02 00:02:29.640293 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-02 00:02:29.640297 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-02 00:02:29.640302 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-02 00:02:29.912542 | orchestrator | Created and switched to workspace "ci"!
2026-02-02 00:02:29.912621 | orchestrator |
2026-02-02 00:02:29.912635 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-02 00:02:29.912645 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-02 00:02:29.912652 | orchestrator | for this configuration.
2026-02-02 00:02:30.162125 | orchestrator | ci.auto.tfvars
2026-02-02 00:02:30.166060 | orchestrator | default_custom.tf
2026-02-02 00:02:31.357832 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-02 00:02:31.979812 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-02 00:02:34.251357 | orchestrator |
2026-02-02 00:02:34.251386 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-02 00:02:34.251393 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-02 00:02:34.251397 | orchestrator |   + create
2026-02-02 00:02:34.251401 | orchestrator |  <= read (data resources)
2026-02-02 00:02:34.251407 | orchestrator |
2026-02-02 00:02:34.251411 | orchestrator | OpenTofu will perform the following actions:
2026-02-02 00:02:34.251415 | orchestrator |
2026-02-02 00:02:34.251419 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-02-02 00:02:34.251423 | orchestrator |   # (config refers to values not yet known)
2026-02-02 00:02:34.251427 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-02-02 00:02:34.251431 | orchestrator |       + checksum = (known after apply)
2026-02-02 00:02:34.251445 | orchestrator |       + created_at = (known after apply)
2026-02-02 00:02:34.251449 | orchestrator |       + file = (known after apply)
2026-02-02 00:02:34.251453 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.251478 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.251482 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-02 00:02:34.251486 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-02 00:02:34.251490 | orchestrator |       + most_recent = true
2026-02-02 00:02:34.251494 | orchestrator |       + name = (known after apply)
2026-02-02 00:02:34.251498 | orchestrator |       + protected = (known after apply)
2026-02-02 00:02:34.251502 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.251507 | orchestrator |       + schema = (known after apply)
2026-02-02 00:02:34.251511 | orchestrator |       + size_bytes = (known after apply)
2026-02-02 00:02:34.251515 | orchestrator |       + tags = (known after apply)
2026-02-02 00:02:34.251519 | orchestrator |       + updated_at = (known after apply)
2026-02-02 00:02:34.251523 | orchestrator |     }
2026-02-02 00:02:34.251527 | orchestrator |
2026-02-02 00:02:34.251531 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-02-02 00:02:34.251535 | orchestrator |   # (config refers to values not yet known)
2026-02-02 00:02:34.251539 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-02-02 00:02:34.251543 | orchestrator |       + checksum = (known after apply)
2026-02-02 00:02:34.251547 | orchestrator |       + created_at = (known after apply)
2026-02-02 00:02:34.251551 | orchestrator |       + file = (known after apply)
2026-02-02 00:02:34.251555 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.251559 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.251563 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-02 00:02:34.251567 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-02 00:02:34.251570 | orchestrator |       + most_recent = true
2026-02-02 00:02:34.251574 | orchestrator |       + name = (known after apply)
2026-02-02 00:02:34.251578 | orchestrator |       + protected = (known after apply)
2026-02-02 00:02:34.251582 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.251586 | orchestrator |       + schema = (known after apply)
2026-02-02 00:02:34.251590 | orchestrator |       + size_bytes = (known after apply)
2026-02-02 00:02:34.251593 | orchestrator |       + tags = (known after apply)
2026-02-02 00:02:34.251597 | orchestrator |       + updated_at = (known after apply)
2026-02-02 00:02:34.251601 | orchestrator |     }
2026-02-02 00:02:34.251605 | orchestrator |
2026-02-02 00:02:34.251609 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-02-02 00:02:34.251613 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-02-02 00:02:34.251616 | orchestrator |       + content = (known after apply)
2026-02-02 00:02:34.251621 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-02 00:02:34.251624 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-02 00:02:34.251628 | orchestrator |       + content_md5 = (known after apply)
2026-02-02 00:02:34.251632 | orchestrator |       + content_sha1 = (known after apply)
2026-02-02 00:02:34.251636 | orchestrator |       + content_sha256 = (known after apply)
2026-02-02 00:02:34.251640 | orchestrator |       + content_sha512 = (known after apply)
2026-02-02 00:02:34.251643 | orchestrator |       + directory_permission = "0777"
2026-02-02 00:02:34.251647 | orchestrator |       + file_permission = "0644"
2026-02-02 00:02:34.251651 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-02-02 00:02:34.251655 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.251659 | orchestrator |     }
2026-02-02 00:02:34.251662 | orchestrator |
2026-02-02 00:02:34.251666 | orchestrator |   # local_file.id_rsa_pub will be created
2026-02-02 00:02:34.251670 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-02-02 00:02:34.251674 | orchestrator |       + content = (known after apply)
2026-02-02 00:02:34.251678 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-02 00:02:34.251682 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-02 00:02:34.251685 | orchestrator |       + content_md5 = (known after apply)
2026-02-02 00:02:34.251689 | orchestrator |       + content_sha1 = (known after apply)
2026-02-02 00:02:34.251693 | orchestrator |       + content_sha256 = (known after apply)
2026-02-02 00:02:34.251697 | orchestrator |       + content_sha512 = (known after apply)
2026-02-02 00:02:34.251701 | orchestrator |       + directory_permission = "0777"
2026-02-02 00:02:34.251704 | orchestrator |       + file_permission = "0644"
2026-02-02 00:02:34.251712 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-02-02 00:02:34.251716 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.251720 | orchestrator |     }
2026-02-02 00:02:34.251724 | orchestrator |
2026-02-02 00:02:34.251731 | orchestrator |   # local_file.inventory will be created
2026-02-02 00:02:34.251735 | orchestrator |   + resource "local_file" "inventory" {
2026-02-02 00:02:34.251739 | orchestrator |       + content = (known after apply)
2026-02-02 00:02:34.251743 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-02 00:02:34.251746 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-02 00:02:34.251750 | orchestrator |       + content_md5 = (known after apply)
2026-02-02 00:02:34.251754 | orchestrator |       + content_sha1 = (known after apply)
2026-02-02 00:02:34.251758 | orchestrator |       + content_sha256 = (known after apply)
2026-02-02 00:02:34.251762 | orchestrator |       + content_sha512 = (known after apply)
2026-02-02 00:02:34.251766 | orchestrator |       + directory_permission = "0777"
2026-02-02 00:02:34.251769 | orchestrator |       + file_permission = "0644"
2026-02-02 00:02:34.251773 | orchestrator |       + filename = "inventory.ci"
2026-02-02 00:02:34.251777 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.251781 | orchestrator |     }
2026-02-02 00:02:34.251785 | orchestrator |
2026-02-02 00:02:34.251789 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-02-02 00:02:34.251792 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-02-02 00:02:34.251796 | orchestrator |       + content = (sensitive value)
2026-02-02 00:02:34.251800 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-02 00:02:34.251804 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-02 00:02:34.251808 | orchestrator |       + content_md5 = (known after apply)
2026-02-02 00:02:34.251812 | orchestrator |       + content_sha1 = (known after apply)
2026-02-02 00:02:34.251815 | orchestrator |       + content_sha256 = (known after apply)
2026-02-02 00:02:34.251824 | orchestrator |       + content_sha512 = (known after apply)
2026-02-02 00:02:34.251828 | orchestrator |       + directory_permission = "0700"
2026-02-02 00:02:34.251832 | orchestrator |       + file_permission = "0600"
2026-02-02 00:02:34.251835 | orchestrator |       + filename = ".id_rsa.ci"
2026-02-02 00:02:34.251839 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.251843 | orchestrator |     }
2026-02-02 00:02:34.251847 | orchestrator |
2026-02-02 00:02:34.251851 | orchestrator |   # null_resource.node_semaphore will be created
2026-02-02 00:02:34.251855 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-02-02 00:02:34.251859 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.251862 | orchestrator |     }
2026-02-02 00:02:34.251866 | orchestrator |
2026-02-02 00:02:34.251870 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-02 00:02:34.251874 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-02 00:02:34.251878 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.251882 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.251886 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.251890 | orchestrator |       + image_id = (known after apply)
2026-02-02 00:02:34.251893 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.251897 | orchestrator |       + name = "testbed-volume-manager-base"
2026-02-02 00:02:34.251901 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.251905 | orchestrator |       + size = 80
2026-02-02 00:02:34.251909 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.251913 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.251916 | orchestrator |     }
2026-02-02 00:02:34.251920 | orchestrator |
2026-02-02 00:02:34.251924 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-02 00:02:34.251928 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-02 00:02:34.251932 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.251936 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.251940 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.251947 | orchestrator |       + image_id = (known after apply)
2026-02-02 00:02:34.251950 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.251954 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-02-02 00:02:34.251958 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.251962 | orchestrator |       + size = 80
2026-02-02 00:02:34.251966 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.251970 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.251973 | orchestrator |     }
2026-02-02 00:02:34.251977 | orchestrator |
2026-02-02 00:02:34.251981 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-02 00:02:34.251985 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-02 00:02:34.251989 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.251993 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.251997 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.252000 | orchestrator |       + image_id = (known after apply)
2026-02-02 00:02:34.252004 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.252008 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-02-02 00:02:34.252012 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.252016 | orchestrator |       + size = 80
2026-02-02 00:02:34.252019 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.252023 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.252027 | orchestrator |     }
2026-02-02 00:02:34.252031 | orchestrator |
2026-02-02 00:02:34.252035 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-02 00:02:34.252039 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-02 00:02:34.252043 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.252046 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.252050 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.252054 | orchestrator |       + image_id = (known after apply)
2026-02-02 00:02:34.252058 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.252062 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-02-02 00:02:34.252065 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.252069 | orchestrator |       + size = 80
2026-02-02 00:02:34.252073 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.252077 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.252081 | orchestrator |     }
2026-02-02 00:02:34.252084 | orchestrator |
2026-02-02 00:02:34.252088 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-02 00:02:34.252092 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-02 00:02:34.252096 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.252100 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.252104 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.252108 | orchestrator |       + image_id = (known after apply)
2026-02-02 00:02:34.252111 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.252117 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-02-02 00:02:34.252122 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.252125 | orchestrator |       + size = 80
2026-02-02 00:02:34.252129 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.252133 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.252137 | orchestrator |     }
2026-02-02 00:02:34.252141 | orchestrator |
2026-02-02 00:02:34.252144 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-02 00:02:34.252148 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-02 00:02:34.252152 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.252156 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.252160 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.252167 | orchestrator |       + image_id = (known after apply)
2026-02-02 00:02:34.252171 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.252174 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-02-02 00:02:34.252178 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.252182 | orchestrator |       + size = 80
2026-02-02 00:02:34.252186 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.252190 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.252194 | orchestrator |     }
2026-02-02 00:02:34.252197 | orchestrator |
2026-02-02 00:02:34.252201 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-02 00:02:34.252208 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-02 00:02:34.252212 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.252216 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.252219 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.252223 | orchestrator |       + image_id = (known after apply)
2026-02-02 00:02:34.252227 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.252231 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-02-02 00:02:34.252235 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.252238 | orchestrator |       + size = 80
2026-02-02 00:02:34.252242 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.252246 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.252250 | orchestrator |     }
2026-02-02 00:02:34.252254 | orchestrator |
2026-02-02 00:02:34.252257 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-02 00:02:34.252263 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 00:02:34.252269 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.252275 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.252281 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.252288 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.252294 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-02-02 00:02:34.252299 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.252305 | orchestrator |       + size = 20
2026-02-02 00:02:34.252310 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.252316 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.252322 | orchestrator |     }
2026-02-02 00:02:34.252328 | orchestrator |
2026-02-02 00:02:34.252338 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-02 00:02:34.252347 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 00:02:34.252353 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.252359 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.252366 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.252372 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.252378 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-02-02 00:02:34.252384 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.252389 | orchestrator |       + size = 20
2026-02-02 00:02:34.252395 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.252402 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.252408 | orchestrator |     }
2026-02-02 00:02:34.252414 | orchestrator |
2026-02-02 00:02:34.252420 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-02 00:02:34.252425 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 00:02:34.252432 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.252454 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.252460 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.252466 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.252472 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-02-02 00:02:34.252478 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.252489 | orchestrator |       + size = 20
2026-02-02 00:02:34.252496 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.252501 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.252507 | orchestrator |     }
2026-02-02 00:02:34.252514 | orchestrator |
2026-02-02 00:02:34.252519 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-02 00:02:34.252525 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 00:02:34.252531 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.252537 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.252543 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.252547 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.252550 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-02-02 00:02:34.252554 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.252558 | orchestrator |       + size = 20
2026-02-02 00:02:34.252562 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.252565 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.252569 | orchestrator |     }
2026-02-02 00:02:34.252573 | orchestrator |
2026-02-02 00:02:34.252577 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-02 00:02:34.252580 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 00:02:34.252584 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.252588 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.252592 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.252596 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.252599 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-02-02 00:02:34.252603 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.252610 | orchestrator |       + size = 20
2026-02-02 00:02:34.252614 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.252618 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.252622 | orchestrator |     }
2026-02-02 00:02:34.252626 | orchestrator |
2026-02-02 00:02:34.252629 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-02 00:02:34.252633 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 00:02:34.252637 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.252641 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.252644 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.252648 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.252652 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-02-02 00:02:34.252656 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.252659 | orchestrator |       + size = 20
2026-02-02 00:02:34.252663 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.252667 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.252671 | orchestrator |     }
2026-02-02 00:02:34.252675 | orchestrator |
2026-02-02 00:02:34.252678 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-02 00:02:34.252682 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 00:02:34.252686 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.252690 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.252693 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.252701 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.252705 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-02-02 00:02:34.252709 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.252713 | orchestrator |       + size = 20
2026-02-02 00:02:34.252716 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.252720 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.252724 | orchestrator |     }
2026-02-02 00:02:34.252728 | orchestrator |
2026-02-02 00:02:34.252732 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-02 00:02:34.252736 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 00:02:34.252743 | orchestrator |       + attachment = (known after apply)
2026-02-02 00:02:34.252747 | orchestrator |       + availability_zone = "nova"
2026-02-02 00:02:34.252751 | orchestrator |       + id = (known after apply)
2026-02-02 00:02:34.252754 | orchestrator |       + metadata = (known after apply)
2026-02-02 00:02:34.252758 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-02-02 00:02:34.252762 | orchestrator |       + region = (known after apply)
2026-02-02 00:02:34.252766 | orchestrator |       + size = 20
2026-02-02 00:02:34.252770 | orchestrator |       + volume_retype_policy = "never"
2026-02-02 00:02:34.252773 | orchestrator |       + volume_type = "ssd"
2026-02-02 00:02:34.252777 | orchestrator |     }
2026-02-02 00:02:34.252781 | orchestrator |
2026-02-02 00:02:34.252785 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-02 00:02:34.252789 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-02 00:02:34.252793 | orchestrator | + attachment = (known after apply) 2026-02-02 00:02:34.252796 | orchestrator | + availability_zone = "nova" 2026-02-02 00:02:34.252800 | orchestrator | + id = (known after apply) 2026-02-02 00:02:34.252804 | orchestrator | + metadata = (known after apply) 2026-02-02 00:02:34.252808 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-02 00:02:34.252811 | orchestrator | + region = (known after apply) 2026-02-02 00:02:34.252815 | orchestrator | + size = 20 2026-02-02 00:02:34.252819 | orchestrator | + volume_retype_policy = "never" 2026-02-02 00:02:34.252823 | orchestrator | + volume_type = "ssd" 2026-02-02 00:02:34.252826 | orchestrator | } 2026-02-02 00:02:34.252830 | orchestrator | 2026-02-02 00:02:34.252834 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-02 00:02:34.252838 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-02 00:02:34.252841 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 00:02:34.252845 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 00:02:34.252849 | orchestrator | + all_metadata = (known after apply) 2026-02-02 00:02:34.252853 | orchestrator | + all_tags = (known after apply) 2026-02-02 00:02:34.252856 | orchestrator | + availability_zone = "nova" 2026-02-02 00:02:34.252860 | orchestrator | + config_drive = true 2026-02-02 00:02:34.252864 | orchestrator | + created = (known after apply) 2026-02-02 00:02:34.252868 | orchestrator | + flavor_id = (known after apply) 2026-02-02 00:02:34.252871 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-02 00:02:34.252875 | orchestrator | + force_delete = false 2026-02-02 00:02:34.252879 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 00:02:34.252883 | 
orchestrator | + id = (known after apply) 2026-02-02 00:02:34.252886 | orchestrator | + image_id = (known after apply) 2026-02-02 00:02:34.252890 | orchestrator | + image_name = (known after apply) 2026-02-02 00:02:34.252894 | orchestrator | + key_pair = "testbed" 2026-02-02 00:02:34.252898 | orchestrator | + name = "testbed-manager" 2026-02-02 00:02:34.252901 | orchestrator | + power_state = "active" 2026-02-02 00:02:34.252905 | orchestrator | + region = (known after apply) 2026-02-02 00:02:34.252909 | orchestrator | + security_groups = (known after apply) 2026-02-02 00:02:34.252913 | orchestrator | + stop_before_destroy = false 2026-02-02 00:02:34.252916 | orchestrator | + updated = (known after apply) 2026-02-02 00:02:34.252920 | orchestrator | + user_data = (sensitive value) 2026-02-02 00:02:34.252924 | orchestrator | 2026-02-02 00:02:34.252928 | orchestrator | + block_device { 2026-02-02 00:02:34.252932 | orchestrator | + boot_index = 0 2026-02-02 00:02:34.252936 | orchestrator | + delete_on_termination = false 2026-02-02 00:02:34.252945 | orchestrator | + destination_type = "volume" 2026-02-02 00:02:34.252949 | orchestrator | + multiattach = false 2026-02-02 00:02:34.252953 | orchestrator | + source_type = "volume" 2026-02-02 00:02:34.252957 | orchestrator | + uuid = (known after apply) 2026-02-02 00:02:34.252964 | orchestrator | } 2026-02-02 00:02:34.252968 | orchestrator | 2026-02-02 00:02:34.252972 | orchestrator | + network { 2026-02-02 00:02:34.252976 | orchestrator | + access_network = false 2026-02-02 00:02:34.252979 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 00:02:34.252983 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 00:02:34.252987 | orchestrator | + mac = (known after apply) 2026-02-02 00:02:34.252991 | orchestrator | + name = (known after apply) 2026-02-02 00:02:34.252994 | orchestrator | + port = (known after apply) 2026-02-02 00:02:34.252998 | orchestrator | + uuid = (known after apply) 2026-02-02 
00:02:34.253002 | orchestrator | } 2026-02-02 00:02:34.253006 | orchestrator | } 2026-02-02 00:02:34.253010 | orchestrator | 2026-02-02 00:02:34.253013 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-02 00:02:34.253017 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-02 00:02:34.253021 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 00:02:34.253025 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 00:02:34.253028 | orchestrator | + all_metadata = (known after apply) 2026-02-02 00:02:34.253032 | orchestrator | + all_tags = (known after apply) 2026-02-02 00:02:34.253036 | orchestrator | + availability_zone = "nova" 2026-02-02 00:02:34.253040 | orchestrator | + config_drive = true 2026-02-02 00:02:34.253044 | orchestrator | + created = (known after apply) 2026-02-02 00:02:34.253047 | orchestrator | + flavor_id = (known after apply) 2026-02-02 00:02:34.253051 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-02 00:02:34.253055 | orchestrator | + force_delete = false 2026-02-02 00:02:34.253059 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 00:02:34.253062 | orchestrator | + id = (known after apply) 2026-02-02 00:02:34.253066 | orchestrator | + image_id = (known after apply) 2026-02-02 00:02:34.253070 | orchestrator | + image_name = (known after apply) 2026-02-02 00:02:34.253074 | orchestrator | + key_pair = "testbed" 2026-02-02 00:02:34.253078 | orchestrator | + name = "testbed-node-0" 2026-02-02 00:02:34.253081 | orchestrator | + power_state = "active" 2026-02-02 00:02:34.253087 | orchestrator | + region = (known after apply) 2026-02-02 00:02:34.253091 | orchestrator | + security_groups = (known after apply) 2026-02-02 00:02:34.253095 | orchestrator | + stop_before_destroy = false 2026-02-02 00:02:34.253099 | orchestrator | + updated = (known after apply) 2026-02-02 00:02:34.253102 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-02 00:02:34.253106 | orchestrator | 2026-02-02 00:02:34.253110 | orchestrator | + block_device { 2026-02-02 00:02:34.253114 | orchestrator | + boot_index = 0 2026-02-02 00:02:34.253118 | orchestrator | + delete_on_termination = false 2026-02-02 00:02:34.253121 | orchestrator | + destination_type = "volume" 2026-02-02 00:02:34.253125 | orchestrator | + multiattach = false 2026-02-02 00:02:34.253129 | orchestrator | + source_type = "volume" 2026-02-02 00:02:34.253133 | orchestrator | + uuid = (known after apply) 2026-02-02 00:02:34.253136 | orchestrator | } 2026-02-02 00:02:34.253140 | orchestrator | 2026-02-02 00:02:34.253144 | orchestrator | + network { 2026-02-02 00:02:34.253148 | orchestrator | + access_network = false 2026-02-02 00:02:34.253152 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 00:02:34.253155 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 00:02:34.253159 | orchestrator | + mac = (known after apply) 2026-02-02 00:02:34.253163 | orchestrator | + name = (known after apply) 2026-02-02 00:02:34.253167 | orchestrator | + port = (known after apply) 2026-02-02 00:02:34.253171 | orchestrator | + uuid = (known after apply) 2026-02-02 00:02:34.253174 | orchestrator | } 2026-02-02 00:02:34.253178 | orchestrator | } 2026-02-02 00:02:34.253182 | orchestrator | 2026-02-02 00:02:34.253186 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-02 00:02:34.253190 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-02 00:02:34.253193 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 00:02:34.253200 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 00:02:34.253204 | orchestrator | + all_metadata = (known after apply) 2026-02-02 00:02:34.253207 | orchestrator | + all_tags = (known after apply) 2026-02-02 00:02:34.253211 | orchestrator | + availability_zone = "nova" 2026-02-02 00:02:34.253215 
| orchestrator | + config_drive = true 2026-02-02 00:02:34.253219 | orchestrator | + created = (known after apply) 2026-02-02 00:02:34.253222 | orchestrator | + flavor_id = (known after apply) 2026-02-02 00:02:34.253226 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-02 00:02:34.253230 | orchestrator | + force_delete = false 2026-02-02 00:02:34.253234 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 00:02:34.253238 | orchestrator | + id = (known after apply) 2026-02-02 00:02:34.253241 | orchestrator | + image_id = (known after apply) 2026-02-02 00:02:34.253245 | orchestrator | + image_name = (known after apply) 2026-02-02 00:02:34.253249 | orchestrator | + key_pair = "testbed" 2026-02-02 00:02:34.253253 | orchestrator | + name = "testbed-node-1" 2026-02-02 00:02:34.253256 | orchestrator | + power_state = "active" 2026-02-02 00:02:34.253260 | orchestrator | + region = (known after apply) 2026-02-02 00:02:34.253264 | orchestrator | + security_groups = (known after apply) 2026-02-02 00:02:34.253268 | orchestrator | + stop_before_destroy = false 2026-02-02 00:02:34.253272 | orchestrator | + updated = (known after apply) 2026-02-02 00:02:34.253275 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-02 00:02:34.253279 | orchestrator | 2026-02-02 00:02:34.253283 | orchestrator | + block_device { 2026-02-02 00:02:34.253287 | orchestrator | + boot_index = 0 2026-02-02 00:02:34.253291 | orchestrator | + delete_on_termination = false 2026-02-02 00:02:34.253294 | orchestrator | + destination_type = "volume" 2026-02-02 00:02:34.253298 | orchestrator | + multiattach = false 2026-02-02 00:02:34.253302 | orchestrator | + source_type = "volume" 2026-02-02 00:02:34.253306 | orchestrator | + uuid = (known after apply) 2026-02-02 00:02:34.253309 | orchestrator | } 2026-02-02 00:02:34.253313 | orchestrator | 2026-02-02 00:02:34.253317 | orchestrator | + network { 2026-02-02 00:02:34.253321 | orchestrator | + access_network = 
false 2026-02-02 00:02:34.253325 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 00:02:34.253328 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 00:02:34.253332 | orchestrator | + mac = (known after apply) 2026-02-02 00:02:34.253336 | orchestrator | + name = (known after apply) 2026-02-02 00:02:34.253340 | orchestrator | + port = (known after apply) 2026-02-02 00:02:34.253343 | orchestrator | + uuid = (known after apply) 2026-02-02 00:02:34.253347 | orchestrator | } 2026-02-02 00:02:34.253351 | orchestrator | } 2026-02-02 00:02:34.253355 | orchestrator | 2026-02-02 00:02:34.253359 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-02 00:02:34.253362 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-02 00:02:34.253366 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 00:02:34.253370 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 00:02:34.253374 | orchestrator | + all_metadata = (known after apply) 2026-02-02 00:02:34.253378 | orchestrator | + all_tags = (known after apply) 2026-02-02 00:02:34.253384 | orchestrator | + availability_zone = "nova" 2026-02-02 00:02:34.253388 | orchestrator | + config_drive = true 2026-02-02 00:02:34.253392 | orchestrator | + created = (known after apply) 2026-02-02 00:02:34.253395 | orchestrator | + flavor_id = (known after apply) 2026-02-02 00:02:34.253399 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-02 00:02:34.253403 | orchestrator | + force_delete = false 2026-02-02 00:02:34.253407 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 00:02:34.253411 | orchestrator | + id = (known after apply) 2026-02-02 00:02:34.253414 | orchestrator | + image_id = (known after apply) 2026-02-02 00:02:34.253421 | orchestrator | + image_name = (known after apply) 2026-02-02 00:02:34.253425 | orchestrator | + key_pair = "testbed" 2026-02-02 00:02:34.253429 | orchestrator | + name = 
"testbed-node-2" 2026-02-02 00:02:34.253466 | orchestrator | + power_state = "active" 2026-02-02 00:02:34.253472 | orchestrator | + region = (known after apply) 2026-02-02 00:02:34.253475 | orchestrator | + security_groups = (known after apply) 2026-02-02 00:02:34.253479 | orchestrator | + stop_before_destroy = false 2026-02-02 00:02:34.253483 | orchestrator | + updated = (known after apply) 2026-02-02 00:02:34.253487 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-02 00:02:34.253491 | orchestrator | 2026-02-02 00:02:34.253494 | orchestrator | + block_device { 2026-02-02 00:02:34.253498 | orchestrator | + boot_index = 0 2026-02-02 00:02:34.253502 | orchestrator | + delete_on_termination = false 2026-02-02 00:02:34.253506 | orchestrator | + destination_type = "volume" 2026-02-02 00:02:34.253514 | orchestrator | + multiattach = false 2026-02-02 00:02:34.253518 | orchestrator | + source_type = "volume" 2026-02-02 00:02:34.253522 | orchestrator | + uuid = (known after apply) 2026-02-02 00:02:34.253525 | orchestrator | } 2026-02-02 00:02:34.253529 | orchestrator | 2026-02-02 00:02:34.253533 | orchestrator | + network { 2026-02-02 00:02:34.253537 | orchestrator | + access_network = false 2026-02-02 00:02:34.253541 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 00:02:34.253544 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 00:02:34.253548 | orchestrator | + mac = (known after apply) 2026-02-02 00:02:34.253552 | orchestrator | + name = (known after apply) 2026-02-02 00:02:34.253556 | orchestrator | + port = (known after apply) 2026-02-02 00:02:34.253559 | orchestrator | + uuid = (known after apply) 2026-02-02 00:02:34.253563 | orchestrator | } 2026-02-02 00:02:34.253567 | orchestrator | } 2026-02-02 00:02:34.253571 | orchestrator | 2026-02-02 00:02:34.253574 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-02 00:02:34.253578 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-02 00:02:34.253582 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 00:02:34.253586 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 00:02:34.253589 | orchestrator | + all_metadata = (known after apply) 2026-02-02 00:02:34.253593 | orchestrator | + all_tags = (known after apply) 2026-02-02 00:02:34.253597 | orchestrator | + availability_zone = "nova" 2026-02-02 00:02:34.253600 | orchestrator | + config_drive = true 2026-02-02 00:02:34.253604 | orchestrator | + created = (known after apply) 2026-02-02 00:02:34.253608 | orchestrator | + flavor_id = (known after apply) 2026-02-02 00:02:34.253612 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-02 00:02:34.253615 | orchestrator | + force_delete = false 2026-02-02 00:02:34.253619 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 00:02:34.253623 | orchestrator | + id = (known after apply) 2026-02-02 00:02:34.253627 | orchestrator | + image_id = (known after apply) 2026-02-02 00:02:34.253630 | orchestrator | + image_name = (known after apply) 2026-02-02 00:02:34.253634 | orchestrator | + key_pair = "testbed" 2026-02-02 00:02:34.253638 | orchestrator | + name = "testbed-node-3" 2026-02-02 00:02:34.253642 | orchestrator | + power_state = "active" 2026-02-02 00:02:34.253645 | orchestrator | + region = (known after apply) 2026-02-02 00:02:34.253649 | orchestrator | + security_groups = (known after apply) 2026-02-02 00:02:34.253653 | orchestrator | + stop_before_destroy = false 2026-02-02 00:02:34.253657 | orchestrator | + updated = (known after apply) 2026-02-02 00:02:34.253660 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-02 00:02:34.253664 | orchestrator | 2026-02-02 00:02:34.253668 | orchestrator | + block_device { 2026-02-02 00:02:34.253674 | orchestrator | + boot_index = 0 2026-02-02 00:02:34.253678 | orchestrator | + delete_on_termination = false 2026-02-02 
00:02:34.253682 | orchestrator | + destination_type = "volume" 2026-02-02 00:02:34.253689 | orchestrator | + multiattach = false 2026-02-02 00:02:34.253693 | orchestrator | + source_type = "volume" 2026-02-02 00:02:34.253697 | orchestrator | + uuid = (known after apply) 2026-02-02 00:02:34.253700 | orchestrator | } 2026-02-02 00:02:34.253704 | orchestrator | 2026-02-02 00:02:34.253708 | orchestrator | + network { 2026-02-02 00:02:34.253712 | orchestrator | + access_network = false 2026-02-02 00:02:34.253715 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 00:02:34.253719 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 00:02:34.253723 | orchestrator | + mac = (known after apply) 2026-02-02 00:02:34.253727 | orchestrator | + name = (known after apply) 2026-02-02 00:02:34.253730 | orchestrator | + port = (known after apply) 2026-02-02 00:02:34.253734 | orchestrator | + uuid = (known after apply) 2026-02-02 00:02:34.253738 | orchestrator | } 2026-02-02 00:02:34.253742 | orchestrator | } 2026-02-02 00:02:34.253745 | orchestrator | 2026-02-02 00:02:34.253749 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-02 00:02:34.253753 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-02 00:02:34.253757 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 00:02:34.253760 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 00:02:34.253764 | orchestrator | + all_metadata = (known after apply) 2026-02-02 00:02:34.253768 | orchestrator | + all_tags = (known after apply) 2026-02-02 00:02:34.253772 | orchestrator | + availability_zone = "nova" 2026-02-02 00:02:34.253775 | orchestrator | + config_drive = true 2026-02-02 00:02:34.253779 | orchestrator | + created = (known after apply) 2026-02-02 00:02:34.253783 | orchestrator | + flavor_id = (known after apply) 2026-02-02 00:02:34.253787 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-02 00:02:34.253790 | 
orchestrator | + force_delete = false 2026-02-02 00:02:34.253794 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 00:02:34.253798 | orchestrator | + id = (known after apply) 2026-02-02 00:02:34.253802 | orchestrator | + image_id = (known after apply) 2026-02-02 00:02:34.253805 | orchestrator | + image_name = (known after apply) 2026-02-02 00:02:34.253809 | orchestrator | + key_pair = "testbed" 2026-02-02 00:02:34.253813 | orchestrator | + name = "testbed-node-4" 2026-02-02 00:02:34.253817 | orchestrator | + power_state = "active" 2026-02-02 00:02:34.253820 | orchestrator | + region = (known after apply) 2026-02-02 00:02:34.253824 | orchestrator | + security_groups = (known after apply) 2026-02-02 00:02:34.253828 | orchestrator | + stop_before_destroy = false 2026-02-02 00:02:34.253832 | orchestrator | + updated = (known after apply) 2026-02-02 00:02:34.253835 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-02 00:02:34.253839 | orchestrator | 2026-02-02 00:02:34.253843 | orchestrator | + block_device { 2026-02-02 00:02:34.253847 | orchestrator | + boot_index = 0 2026-02-02 00:02:34.253850 | orchestrator | + delete_on_termination = false 2026-02-02 00:02:34.253854 | orchestrator | + destination_type = "volume" 2026-02-02 00:02:34.253858 | orchestrator | + multiattach = false 2026-02-02 00:02:34.253862 | orchestrator | + source_type = "volume" 2026-02-02 00:02:34.253865 | orchestrator | + uuid = (known after apply) 2026-02-02 00:02:34.253869 | orchestrator | } 2026-02-02 00:02:34.253873 | orchestrator | 2026-02-02 00:02:34.253877 | orchestrator | + network { 2026-02-02 00:02:34.253880 | orchestrator | + access_network = false 2026-02-02 00:02:34.253884 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 00:02:34.253888 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 00:02:34.253892 | orchestrator | + mac = (known after apply) 2026-02-02 00:02:34.253895 | orchestrator | + name = (known 
after apply) 2026-02-02 00:02:34.253899 | orchestrator | + port = (known after apply) 2026-02-02 00:02:34.253905 | orchestrator | + uuid = (known after apply) 2026-02-02 00:02:34.253909 | orchestrator | } 2026-02-02 00:02:34.253913 | orchestrator | } 2026-02-02 00:02:34.253919 | orchestrator | 2026-02-02 00:02:34.253923 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-02 00:02:34.253927 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-02 00:02:34.253931 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 00:02:34.253934 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 00:02:34.253938 | orchestrator | + all_metadata = (known after apply) 2026-02-02 00:02:34.253942 | orchestrator | + all_tags = (known after apply) 2026-02-02 00:02:34.253946 | orchestrator | + availability_zone = "nova" 2026-02-02 00:02:34.253949 | orchestrator | + config_drive = true 2026-02-02 00:02:34.253953 | orchestrator | + created = (known after apply) 2026-02-02 00:02:34.253957 | orchestrator | + flavor_id = (known after apply) 2026-02-02 00:02:34.253961 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-02 00:02:34.253964 | orchestrator | + force_delete = false 2026-02-02 00:02:34.253971 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 00:02:34.253975 | orchestrator | + id = (known after apply) 2026-02-02 00:02:34.253978 | orchestrator | + image_id = (known after apply) 2026-02-02 00:02:34.253982 | orchestrator | + image_name = (known after apply) 2026-02-02 00:02:34.253986 | orchestrator | + key_pair = "testbed" 2026-02-02 00:02:34.253990 | orchestrator | + name = "testbed-node-5" 2026-02-02 00:02:34.253993 | orchestrator | + power_state = "active" 2026-02-02 00:02:34.253997 | orchestrator | + region = (known after apply) 2026-02-02 00:02:34.254001 | orchestrator | + security_groups = (known after apply) 2026-02-02 00:02:34.254005 | orchestrator | + 
stop_before_destroy = false 2026-02-02 00:02:34.254008 | orchestrator | + updated = (known after apply) 2026-02-02 00:02:34.254026 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-02 00:02:34.254031 | orchestrator | 2026-02-02 00:02:34.254068 | orchestrator | + block_device { 2026-02-02 00:02:34.254073 | orchestrator | + boot_index = 0 2026-02-02 00:02:34.254076 | orchestrator | + delete_on_termination = false 2026-02-02 00:02:34.254083 | orchestrator | + destination_type = "volume" 2026-02-02 00:02:34.254087 | orchestrator | + multiattach = false 2026-02-02 00:02:34.254094 | orchestrator | + source_type = "volume" 2026-02-02 00:02:34.254098 | orchestrator | + uuid = (known after apply) 2026-02-02 00:02:34.254102 | orchestrator | } 2026-02-02 00:02:34.254106 | orchestrator | 2026-02-02 00:02:34.254109 | orchestrator | + network { 2026-02-02 00:02:34.254113 | orchestrator | + access_network = false 2026-02-02 00:02:34.254117 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 00:02:34.254121 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 00:02:34.254125 | orchestrator | + mac = (known after apply) 2026-02-02 00:02:34.254129 | orchestrator | + name = (known after apply) 2026-02-02 00:02:34.254133 | orchestrator | + port = (known after apply) 2026-02-02 00:02:34.254136 | orchestrator | + uuid = (known after apply) 2026-02-02 00:02:34.254140 | orchestrator | } 2026-02-02 00:02:34.254144 | orchestrator | } 2026-02-02 00:02:34.254148 | orchestrator | 2026-02-02 00:02:34.254152 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-02 00:02:34.254156 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-02 00:02:34.254160 | orchestrator | + fingerprint = (known after apply) 2026-02-02 00:02:34.254163 | orchestrator | + id = (known after apply) 2026-02-02 00:02:34.254167 | orchestrator | + name = "testbed" 2026-02-02 00:02:34.254171 | orchestrator | + private_key = 
(sensitive value) 2026-02-02 00:02:34.254175 | orchestrator | + public_key = (known after apply) 2026-02-02 00:02:34.254178 | orchestrator | + region = (known after apply) 2026-02-02 00:02:34.254182 | orchestrator | + user_id = (known after apply) 2026-02-02 00:02:34.254186 | orchestrator | } 2026-02-02 00:02:34.254190 | orchestrator | 2026-02-02 00:02:34.254194 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-02 00:02:34.254198 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-02 00:02:34.254216 | orchestrator | + device = (known after apply) 2026-02-02 00:02:34.254223 | orchestrator | + id = (known after apply) 2026-02-02 00:02:34.254229 | orchestrator | + instance_id = (known after apply) 2026-02-02 00:02:34.254235 | orchestrator | + region = (known after apply) 2026-02-02 00:02:34.254240 | orchestrator | + volume_id = (known after apply) 2026-02-02 00:02:34.254246 | orchestrator | } 2026-02-02 00:02:34.254252 | orchestrator | 2026-02-02 00:02:34.254259 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-02 00:02:34.254265 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-02 00:02:34.254272 | orchestrator | + device = (known after apply) 2026-02-02 00:02:34.254278 | orchestrator | + id = (known after apply) 2026-02-02 00:02:34.254284 | orchestrator | + instance_id = (known after apply) 2026-02-02 00:02:34.254290 | orchestrator | + region = (known after apply) 2026-02-02 00:02:34.254295 | orchestrator | + volume_id = (known after apply) 2026-02-02 00:02:34.254301 | orchestrator | } 2026-02-02 00:02:34.254307 | orchestrator | 2026-02-02 00:02:34.254313 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-02 00:02:34.254319 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-02-02 00:02:34.273309 | orchestrator | + network_id = (known after apply)
2026-02-02 00:02:34.273315 | orchestrator | + no_gateway = false
2026-02-02 00:02:34.273320 | orchestrator | + region = (known after apply)
2026-02-02 00:02:34.273326 | orchestrator | + service_types = (known after apply)
2026-02-02 00:02:34.273335 | orchestrator | + tenant_id = (known after apply)
2026-02-02 00:02:34.273340 | orchestrator |
2026-02-02 00:02:34.273345 | orchestrator | + allocation_pool {
2026-02-02 00:02:34.273351 | orchestrator | + end = "192.168.31.250"
2026-02-02 00:02:34.273356 | orchestrator | + start = "192.168.31.200"
2026-02-02 00:02:34.273362 | orchestrator | }
2026-02-02 00:02:34.273367 | orchestrator | }
2026-02-02 00:02:34.273373 | orchestrator |
2026-02-02 00:02:34.273378 | orchestrator | # terraform_data.image will be created
2026-02-02 00:02:34.273384 | orchestrator | + resource "terraform_data" "image" {
2026-02-02 00:02:34.273389 | orchestrator | + id = (known after apply)
2026-02-02 00:02:34.273394 | orchestrator | + input = "Ubuntu 24.04"
2026-02-02 00:02:34.273400 | orchestrator | + output = (known after apply)
2026-02-02 00:02:34.273405 | orchestrator | }
2026-02-02 00:02:34.273411 | orchestrator |
2026-02-02 00:02:34.273416 | orchestrator | # terraform_data.image_node will be created
2026-02-02 00:02:34.273421 | orchestrator | + resource "terraform_data" "image_node" {
2026-02-02 00:02:34.273427 | orchestrator | + id = (known after apply)
2026-02-02 00:02:34.273444 | orchestrator | + input = "Ubuntu 24.04"
2026-02-02 00:02:34.273451 | orchestrator | + output = (known after apply)
2026-02-02 00:02:34.273457 | orchestrator | }
2026-02-02 00:02:34.273462 | orchestrator |
2026-02-02 00:02:34.273468 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
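As a quick sanity check on the `subnet_management` plan above: the allocation pool (192.168.31.200–192.168.31.250) must lie inside the subnet's /20 CIDR. A minimal sketch using Python's `ipaddress` module, with the values copied from the plan output (this check is illustrative and not part of the job itself):

```python
import ipaddress

# Values taken from the subnet-testbed-management plan output above.
cidr = ipaddress.ip_network("192.168.16.0/20")
pool_start = ipaddress.ip_address("192.168.31.200")
pool_end = ipaddress.ip_address("192.168.31.250")

# A /20 covers 192.168.16.0 through 192.168.31.255, so both pool
# boundaries fall inside the subnet's address range.
assert pool_start in cidr and pool_end in cidr
```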
2026-02-02 00:02:34.273474 | orchestrator |
2026-02-02 00:02:34.273479 | orchestrator | Changes to Outputs:
2026-02-02 00:02:34.273485 | orchestrator | + manager_address = (sensitive value)
2026-02-02 00:02:34.273490 | orchestrator | + private_key = (sensitive value)
2026-02-02 00:02:34.560322 | orchestrator | terraform_data.image_node: Creating...
2026-02-02 00:02:34.561071 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=e6d889fb-044d-eee4-4aea-8591b5b18936]
2026-02-02 00:02:34.561421 | orchestrator | terraform_data.image: Creating...
2026-02-02 00:02:34.561768 | orchestrator | terraform_data.image: Creation complete after 0s [id=50c92a0d-f2cb-c306-6138-acf0d3c84b9d]
2026-02-02 00:02:34.576820 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-02 00:02:34.577087 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-02 00:02:34.585220 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-02 00:02:34.585299 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-02 00:02:34.587098 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-02 00:02:34.594579 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-02 00:02:34.594846 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-02 00:02:34.595071 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-02 00:02:34.595772 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-02 00:02:34.615216 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-02 00:02:35.056348 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-02 00:02:35.060266 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-02 00:02:35.068752 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-02 00:02:35.074111 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-02 00:02:35.126517 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-02-02 00:02:35.137661 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-02 00:02:35.996147 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=c72dc4ce-737a-4f33-bbc2-f861269b416a]
2026-02-02 00:02:36.007361 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-02 00:02:38.278575 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=b0ea612a-524a-49e0-9350-b51de64b4b0f]
2026-02-02 00:02:38.283467 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-02 00:02:38.306762 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=2465f817-ac52-4990-8055-49becb307e2f]
2026-02-02 00:02:38.313777 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-02 00:02:38.342108 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=78cf2400-96ef-4814-8ef8-9c5b7903f7b2]
2026-02-02 00:02:38.349008 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-02 00:02:38.418489 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=16c98517-e0bb-4d3e-881d-5a0c6479c324]
2026-02-02 00:02:38.424064 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-02 00:02:38.532677 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=b6ca68c3-b66c-4649-954f-01a9ba336075]
2026-02-02 00:02:38.539721 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-02 00:02:38.648759 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=09bfd8bd-6f2f-4d2c-8da9-081114b71f81]
2026-02-02 00:02:38.657424 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-02 00:02:38.678066 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8]
2026-02-02 00:02:38.694618 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=8be21885-29f7-4026-87ce-cd032f624f70]
2026-02-02 00:02:38.698405 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-02 00:02:38.709027 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=0ac81ad32884986177ec97d61fd6adf4696dca9a]
2026-02-02 00:02:38.715689 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-02 00:02:38.718294 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-02 00:02:38.724413 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=ddbcfc469f859143becbd60bc52aed6eb8a2edf4]
2026-02-02 00:02:38.724541 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=2791ba27-caf1-4c6d-bd50-3e0320bbaa42]
2026-02-02 00:02:39.385335 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=1f095ad0-f8d9-4c0e-a607-577effe431db]
2026-02-02 00:02:39.728508 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=2afa1d10-4089-4128-b41b-3f81acd45110]
2026-02-02 00:02:39.738321 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-02 00:02:41.759763 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=e82d7007-471e-45f2-a897-21e3387dc851]
2026-02-02 00:02:41.770801 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=4323fcbf-cb44-453a-b59c-b231361fa0b7]
2026-02-02 00:02:41.823931 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=ed850c6f-7155-455b-802e-f8313bdcc2ad]
2026-02-02 00:02:41.938941 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=923eb351-799f-412b-88c1-7c0ba22434bf]
2026-02-02 00:02:42.134930 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=60b6cadb-8996-428d-836c-f59bd1d57e57]
2026-02-02 00:02:42.391105 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c]
2026-02-02 00:02:45.055529 | orchestrator | openstack_networking_router_v2.router: Creation complete after 5s [id=ef48c40e-0854-4b0c-8784-679b8694f6f2]
2026-02-02 00:02:45.205277 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-02 00:02:45.205366 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-02 00:02:45.205385 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-02 00:02:45.300296 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=72baeea9-7033-411d-a610-5c63a0b15a20]
2026-02-02 00:02:45.312730 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-02 00:02:45.315866 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-02 00:02:45.316872 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-02 00:02:45.317864 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-02 00:02:45.318822 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-02 00:02:45.320295 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-02 00:02:45.486822 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=9a55598b-0dc3-4bf2-aac3-1e39c62923ec]
2026-02-02 00:02:45.492734 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-02 00:02:45.496817 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-02 00:02:45.500431 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-02 00:02:45.890008 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=1ed9832c-2cbe-41f3-a4a1-53fb2682d1fb]
2026-02-02 00:02:45.904649 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-02 00:02:46.270802 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=85cecd28-bad0-4700-b2ba-bd56756df6fa]
2026-02-02 00:02:46.281274 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-02 00:02:46.526921 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=f00c7309-53fa-4c3d-939b-a26f8113e257]
2026-02-02 00:02:46.539762 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-02 00:02:46.709869 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=533a50b4-322a-4c18-9d53-3c7f2c725c3e]
2026-02-02 00:02:46.719692 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-02 00:02:46.797961 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=feb2fc7f-d388-4b39-a34d-62df955058d2]
2026-02-02 00:02:46.803041 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-02 00:02:47.240649 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=45b2d860-437a-4ea4-bc2c-93a4fd2d6e87]
2026-02-02 00:02:47.246002 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-02 00:02:47.368435 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=174f4ef9-675c-42b9-8c40-80fdba783ff1]
2026-02-02 00:02:47.375898 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-02 00:02:47.453705 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=da8b09fb-e0e9-476d-9a52-518019a88216]
2026-02-02 00:02:47.487758 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=109fcbae-e068-4ec0-9a57-0bcd669b6061]
2026-02-02 00:02:47.493079 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=59e79206-364d-424c-9ea0-4bcf7e856afe]
2026-02-02 00:02:47.549013 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 3s [id=c9bb48ae-5a0e-419c-8a4e-d4eb6dbe4836]
2026-02-02 00:02:47.783185 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=ed72477b-afc9-4737-88f3-f7a309d5e51b]
2026-02-02 00:02:47.792881 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 3s [id=fa2ac232-0a4b-4aa1-9924-eead5c4ddc1a]
2026-02-02 00:02:48.038481 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=1c36dac3-8842-4023-8a16-af15dde8feab]
2026-02-02 00:02:48.187381 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 3s [id=c43db118-ad6e-462e-b55a-b5ab54b3c3a5]
2026-02-02 00:02:48.385663 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=8218151a-eb55-4c24-bb8d-484d472452f3]
2026-02-02 00:02:49.776579 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=1e7e984e-25bb-49d4-9ff3-78f6973984c2]
2026-02-02 00:02:49.795575 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-02 00:02:49.807816 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-02 00:02:49.808974 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-02 00:02:49.809099 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-02 00:02:49.824952 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-02 00:02:49.825132 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-02 00:02:49.828078 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-02 00:02:53.002752 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=00afdd3e-b300-4e6e-a1fd-154a52422e9d]
2026-02-02 00:02:53.014217 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-02 00:02:53.026769 | orchestrator | local_file.inventory: Creating...
2026-02-02 00:02:53.029883 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-02 00:02:53.032883 | orchestrator | local_file.inventory: Creation complete after 0s [id=b27f07ea01d3d4ff4ff7fb01c6646cb16e9c92ad]
2026-02-02 00:02:53.035981 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=fcc7dfd144b6f9fee077d6525a260b194a85fbf2]
2026-02-02 00:02:54.337395 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=00afdd3e-b300-4e6e-a1fd-154a52422e9d]
2026-02-02 00:02:59.812593 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-02 00:02:59.812708 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-02 00:02:59.812919 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-02 00:02:59.828177 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-02 00:02:59.828267 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-02 00:02:59.829248 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-02 00:03:09.819297 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-02 00:03:09.819431 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-02 00:03:09.819493 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-02 00:03:09.828908 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-02 00:03:09.829033 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-02 00:03:09.830117 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-02 00:03:19.822597 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-02-02 00:03:19.822719 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-02-02 00:03:19.822748 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-02-02 00:03:19.830091 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-02-02 00:03:19.830159 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-02-02 00:03:19.830171 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-02-02 00:03:20.624566 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=b7e9f8d4-da6d-4550-9230-862d9566270f]
2026-02-02 00:03:21.594116 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 32s [id=d9e35ffa-44ef-4550-ac52-5031c0a94802]
2026-02-02 00:03:29.831013 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-02-02 00:03:29.831131 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-02-02 00:03:29.831148 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-02-02 00:03:29.831194 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-02-02 00:03:30.772770 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=66b8d5b0-e34c-49b0-a646-a522c96f1e7d]
2026-02-02 00:03:31.083844 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=08dbf043-10ee-42a6-8501-f2bf53bff872]
2026-02-02 00:03:39.839581 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-02-02 00:03:39.839699 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-02-02 00:03:40.915966 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 51s [id=c6c96306-bd8b-4846-b05e-7eb6759736c4]
2026-02-02 00:03:49.840115 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m0s elapsed]
2026-02-02 00:03:51.005055 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m1s [id=1e4b892f-a1a3-4b48-9169-3be165100368]
2026-02-02 00:03:51.034779 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-02 00:03:51.044989 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=2817497602899120068]
2026-02-02 00:03:51.053400 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-02 00:03:51.053657 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-02 00:03:51.053926 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-02 00:03:51.054099 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-02 00:03:51.054972 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-02 00:03:51.071862 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-02 00:03:51.105820 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-02 00:03:51.122877 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-02 00:03:51.127006 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-02 00:03:51.135703 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-02 00:03:54.591510 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=b7e9f8d4-da6d-4550-9230-862d9566270f/b6ca68c3-b66c-4649-954f-01a9ba336075]
2026-02-02 00:03:54.623935 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=b7e9f8d4-da6d-4550-9230-862d9566270f/16c98517-e0bb-4d3e-881d-5a0c6479c324]
2026-02-02 00:03:54.663791 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=1e4b892f-a1a3-4b48-9169-3be165100368/b0ea612a-524a-49e0-9350-b51de64b4b0f]
2026-02-02 00:03:54.680685 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=66b8d5b0-e34c-49b0-a646-a522c96f1e7d/2465f817-ac52-4990-8055-49becb307e2f]
2026-02-02 00:03:54.841884 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=66b8d5b0-e34c-49b0-a646-a522c96f1e7d/78cf2400-96ef-4814-8ef8-9c5b7903f7b2]
2026-02-02 00:03:54.928144 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=1e4b892f-a1a3-4b48-9169-3be165100368/2791ba27-caf1-4c6d-bd50-3e0320bbaa42]
2026-02-02 00:03:56.394225 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=66b8d5b0-e34c-49b0-a646-a522c96f1e7d/8be21885-29f7-4026-87ce-cd032f624f70]
2026-02-02 00:04:00.903315 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=b7e9f8d4-da6d-4550-9230-862d9566270f/09bfd8bd-6f2f-4d2c-8da9-081114b71f81]
2026-02-02 00:04:01.086270 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=1e4b892f-a1a3-4b48-9169-3be165100368/08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8]
2026-02-02 00:04:01.113244 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-02 00:04:11.122244 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-02 00:04:11.640499 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=6bcbe71f-c763-4521-9480-edadc1899002]
2026-02-02 00:04:11.665012 | orchestrator |
2026-02-02 00:04:11.665121 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-02 00:04:11.665130 | orchestrator |
2026-02-02 00:04:11.665135 | orchestrator | Outputs:
2026-02-02 00:04:11.665139 | orchestrator |
2026-02-02 00:04:11.665150 | orchestrator | manager_address = 
2026-02-02 00:04:11.665155 | orchestrator | private_key = 
2026-02-02 00:04:11.777474 | orchestrator | ok: Runtime: 0:01:45.025909
2026-02-02 00:04:11.807929 |
2026-02-02 00:04:11.808051 | TASK [Create infrastructure (stable)]
2026-02-02 00:04:12.344612 | orchestrator | skipping: Conditional result was False
2026-02-02 00:04:12.362277 |
2026-02-02 00:04:12.362472 | TASK [Fetch manager address]
2026-02-02 00:04:12.819210 | orchestrator | ok
2026-02-02 00:04:12.829071 |
2026-02-02 00:04:12.829202 | TASK [Set manager_host address]
2026-02-02 00:04:12.910460 | orchestrator | ok
2026-02-02 00:04:12.921108 |
2026-02-02 00:04:12.921240 | LOOP [Update ansible collections]
2026-02-02 00:04:24.555084 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-02 00:04:24.555501 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-02 00:04:24.555566 | orchestrator | Starting galaxy collection install process
2026-02-02 00:04:24.555608 | orchestrator | Process install dependency map
2026-02-02 00:04:24.555647 | orchestrator | Starting collection install process
2026-02-02 00:04:24.555683 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2026-02-02 00:04:24.555726 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2026-02-02 00:04:24.555832 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-02 00:04:24.555927 | orchestrator | ok: Item: commons Runtime: 0:00:11.256840
2026-02-02 00:04:29.017311 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-02 00:04:29.017453 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-02 00:04:29.017585 | orchestrator | Starting galaxy collection install process
2026-02-02 00:04:29.017621 | orchestrator | Process install dependency map
2026-02-02 00:04:29.017643 | orchestrator | Starting collection install process
2026-02-02 00:04:29.017664 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2026-02-02 00:04:29.017684 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2026-02-02 00:04:29.017703 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-02 00:04:29.017754 | orchestrator | ok: Item: services Runtime: 0:00:03.953282
2026-02-02 00:04:29.045132 |
2026-02-02 00:04:29.045275 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-02 00:04:39.634660 | orchestrator | ok
2026-02-02 00:04:39.645812 |
2026-02-02 00:04:39.645927 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-02 00:05:39.693072 | orchestrator | ok
2026-02-02 00:05:39.703409 |
2026-02-02 00:05:39.703538 | TASK [Fetch manager ssh hostkey]
2026-02-02 00:05:41.284536 | orchestrator | Output suppressed because no_log was given
2026-02-02 00:05:41.298458 |
2026-02-02 00:05:41.298613 | TASK [Get ssh keypair from terraform environment]
2026-02-02 00:05:41.838384 | orchestrator | ok: Runtime: 0:00:00.011118
2026-02-02 00:05:41.855140 |
2026-02-02 00:05:41.855567 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-02 00:05:41.906556 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-02 00:05:41.917218 |
2026-02-02 00:05:41.917353 | TASK [Run manager part 0]
2026-02-02 00:05:43.679478 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-02 00:05:43.838119 | orchestrator |
2026-02-02 00:05:43.838183 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-02 00:05:43.838195 | orchestrator |
2026-02-02 00:05:43.838211 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-02 00:05:45.746830 | orchestrator | ok: [testbed-manager]
2026-02-02 00:05:45.746881 | orchestrator |
2026-02-02 00:05:45.746902 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-02 00:05:45.746912 | orchestrator |
2026-02-02 00:05:45.746988 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-02 00:05:47.931069 | orchestrator | ok: [testbed-manager]
2026-02-02 00:05:47.931149 | orchestrator |
2026-02-02 00:05:47.931165 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-02 00:05:48.639301 | orchestrator | ok: [testbed-manager]
2026-02-02 00:05:48.639436 | orchestrator |
2026-02-02 00:05:48.639450 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-02 00:05:48.688585 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:05:48.688824 | orchestrator |
2026-02-02 00:05:48.688847 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-02 00:05:48.713757 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:05:48.713820 | orchestrator |
2026-02-02 00:05:48.713832 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-02 00:05:48.750413 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:05:48.750521 | orchestrator |
2026-02-02 00:05:48.750534 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-02 00:05:48.793403 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:05:48.793458 | orchestrator |
2026-02-02 00:05:48.793464 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-02-02 00:05:48.821050 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:05:48.821394 | orchestrator |
2026-02-02 00:05:48.821589 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-02-02 00:05:48.852961 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:05:48.853037 | orchestrator |
2026-02-02 00:05:48.853056 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-02-02 00:05:48.894134 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:05:48.894183 | orchestrator |
2026-02-02 00:05:48.894190 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-02-02 00:05:49.661253 | orchestrator | changed: [testbed-manager]
2026-02-02 00:05:49.661323 | orchestrator |
2026-02-02 00:05:49.661334 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-02-02 00:08:46.754616 | orchestrator | changed: [testbed-manager]
2026-02-02 00:08:46.754691 | orchestrator |
2026-02-02 00:08:46.754709 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-02 00:10:13.089681 | orchestrator | changed: [testbed-manager]
2026-02-02 00:10:13.089774 | orchestrator |
2026-02-02 00:10:13.089791 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-02 00:10:41.608958 | orchestrator | changed: [testbed-manager]
2026-02-02 00:10:41.609033 | orchestrator |
2026-02-02 00:10:41.609050 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-02 00:10:52.735300 | orchestrator | changed: [testbed-manager]
2026-02-02 00:10:52.735405 | orchestrator |
2026-02-02 00:10:52.735424 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-02-02 00:10:52.790571 | orchestrator | ok: [testbed-manager]
2026-02-02 00:10:52.790691 | orchestrator |
2026-02-02 00:10:52.790710 | orchestrator | TASK [Get current user] ********************************************************
2026-02-02 00:10:54.238004 | orchestrator | ok: [testbed-manager]
2026-02-02 00:10:54.238058 | orchestrator |
2026-02-02 00:10:54.238066 | orchestrator | TASK [Create venv directory] ***************************************************
2026-02-02 00:10:54.995139 | orchestrator | changed: [testbed-manager]
2026-02-02 00:10:55.383827 | orchestrator |
2026-02-02 00:10:55.383870 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-02-02 00:11:03.568545 | orchestrator | changed: [testbed-manager]
2026-02-02 00:11:03.568590 | orchestrator |
2026-02-02 00:11:03.568616 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-02-02 00:11:10.559205 | orchestrator | changed: [testbed-manager]
2026-02-02 00:11:10.559312 | orchestrator |
2026-02-02 00:11:10.559332 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-02-02 00:11:13.581584 | orchestrator | changed:
[testbed-manager] 2026-02-02 00:11:13.581708 | orchestrator | 2026-02-02 00:11:13.581727 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-02 00:11:15.450910 | orchestrator | changed: [testbed-manager] 2026-02-02 00:11:15.451011 | orchestrator | 2026-02-02 00:11:15.451027 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-02 00:11:16.589097 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-02 00:11:16.589195 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-02 00:11:16.589211 | orchestrator | 2026-02-02 00:11:16.589224 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-02 00:11:16.630029 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-02 00:11:16.630096 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-02 00:11:16.630106 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-02 00:11:16.630113 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-02 00:11:34.952012 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-02 00:11:34.952084 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-02 00:11:34.952094 | orchestrator | 2026-02-02 00:11:34.952101 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-02 00:11:35.512942 | orchestrator | changed: [testbed-manager] 2026-02-02 00:11:35.513662 | orchestrator | 2026-02-02 00:11:35.513689 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-02 00:12:54.422719 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-02 00:12:54.422761 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-02 00:12:54.422768 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-02 00:12:54.422773 | orchestrator | 2026-02-02 00:12:54.422778 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-02 00:12:56.849351 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-02 00:12:56.849463 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-02 00:12:56.849480 | orchestrator | 2026-02-02 00:12:56.849493 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-02 00:12:56.849505 | orchestrator | 2026-02-02 00:12:56.849516 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 00:12:58.249739 | orchestrator | ok: [testbed-manager] 2026-02-02 00:12:58.249827 | orchestrator | 2026-02-02 00:12:58.249844 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-02 00:12:58.297091 | orchestrator | ok: [testbed-manager] 2026-02-02 00:12:58.297208 | 
orchestrator | 2026-02-02 00:12:58.297235 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-02 00:12:58.362420 | orchestrator | ok: [testbed-manager] 2026-02-02 00:12:58.362556 | orchestrator | 2026-02-02 00:12:58.362576 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-02 00:12:59.209227 | orchestrator | changed: [testbed-manager] 2026-02-02 00:12:59.209317 | orchestrator | 2026-02-02 00:12:59.209335 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-02 00:12:59.965592 | orchestrator | changed: [testbed-manager] 2026-02-02 00:12:59.965639 | orchestrator | 2026-02-02 00:12:59.965647 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-02 00:13:01.463828 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-02 00:13:01.463871 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-02 00:13:01.463879 | orchestrator | 2026-02-02 00:13:01.463894 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-02 00:13:02.881727 | orchestrator | changed: [testbed-manager] 2026-02-02 00:13:02.881844 | orchestrator | 2026-02-02 00:13:02.881858 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-02 00:13:04.741795 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-02 00:13:04.741971 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-02 00:13:04.741987 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-02 00:13:04.741998 | orchestrator | 2026-02-02 00:13:04.742009 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-02 00:13:04.798501 | orchestrator | skipping: 
[testbed-manager] 2026-02-02 00:13:04.798570 | orchestrator | 2026-02-02 00:13:04.798580 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-02 00:13:04.866143 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:13:04.866203 | orchestrator | 2026-02-02 00:13:04.866214 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-02 00:13:05.454812 | orchestrator | changed: [testbed-manager] 2026-02-02 00:13:05.454847 | orchestrator | 2026-02-02 00:13:05.454853 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-02 00:13:05.531263 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:13:05.531300 | orchestrator | 2026-02-02 00:13:05.531307 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-02 00:13:06.443616 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-02 00:13:06.443663 | orchestrator | changed: [testbed-manager] 2026-02-02 00:13:06.443673 | orchestrator | 2026-02-02 00:13:06.443682 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-02 00:13:06.478341 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:13:06.478376 | orchestrator | 2026-02-02 00:13:06.478381 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-02 00:13:06.513592 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:13:06.513631 | orchestrator | 2026-02-02 00:13:06.513639 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-02 00:13:06.549806 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:13:06.549847 | orchestrator | 2026-02-02 00:13:06.549857 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-02 00:13:06.629881 | 
orchestrator | skipping: [testbed-manager] 2026-02-02 00:13:06.629925 | orchestrator | 2026-02-02 00:13:06.629934 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-02 00:13:07.387023 | orchestrator | ok: [testbed-manager] 2026-02-02 00:13:07.387081 | orchestrator | 2026-02-02 00:13:07.387112 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-02 00:13:07.387124 | orchestrator | 2026-02-02 00:13:07.387135 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 00:13:08.858367 | orchestrator | ok: [testbed-manager] 2026-02-02 00:13:08.858603 | orchestrator | 2026-02-02 00:13:08.858637 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-02 00:13:09.843683 | orchestrator | changed: [testbed-manager] 2026-02-02 00:13:09.843758 | orchestrator | 2026-02-02 00:13:09.843774 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:13:09.843787 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-02 00:13:09.843799 | orchestrator | 2026-02-02 00:13:10.467489 | orchestrator | ok: Runtime: 0:07:27.564439 2026-02-02 00:13:10.486630 | 2026-02-02 00:13:10.486888 | TASK [Point out that the log in on the manager is now possible] 2026-02-02 00:13:10.526526 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-02 00:13:10.537206 | 2026-02-02 00:13:10.537340 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-02 00:13:10.568298 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-02-02 00:13:10.576636 | 2026-02-02 00:13:10.576739 | TASK [Run manager part 1 + 2] 2026-02-02 00:13:13.561860 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-02 00:13:13.628457 | orchestrator | 2026-02-02 00:13:13.628517 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-02 00:13:13.628530 | orchestrator | 2026-02-02 00:13:13.628553 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 00:13:16.669434 | orchestrator | ok: [testbed-manager] 2026-02-02 00:13:16.669532 | orchestrator | 2026-02-02 00:13:16.669580 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-02 00:13:16.721723 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:13:16.721792 | orchestrator | 2026-02-02 00:13:16.721813 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-02 00:13:16.763670 | orchestrator | ok: [testbed-manager] 2026-02-02 00:13:16.763720 | orchestrator | 2026-02-02 00:13:16.763732 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-02 00:13:16.809264 | orchestrator | ok: [testbed-manager] 2026-02-02 00:13:16.809340 | orchestrator | 2026-02-02 00:13:16.809356 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-02 00:13:16.882733 | orchestrator | ok: [testbed-manager] 2026-02-02 00:13:16.882775 | orchestrator | 2026-02-02 00:13:16.882783 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-02 00:13:16.954528 | orchestrator | ok: [testbed-manager] 2026-02-02 00:13:16.954569 | orchestrator | 2026-02-02 00:13:16.954577 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-02 00:13:16.996873 | 
orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-02 00:13:16.996903 | orchestrator | 2026-02-02 00:13:16.996909 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-02 00:13:17.760838 | orchestrator | ok: [testbed-manager] 2026-02-02 00:13:17.760874 | orchestrator | 2026-02-02 00:13:17.760882 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-02 00:13:17.815156 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:13:17.815194 | orchestrator | 2026-02-02 00:13:17.815279 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-02 00:13:19.252238 | orchestrator | changed: [testbed-manager] 2026-02-02 00:13:19.252314 | orchestrator | 2026-02-02 00:13:19.252329 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-02 00:13:19.816715 | orchestrator | ok: [testbed-manager] 2026-02-02 00:13:19.816755 | orchestrator | 2026-02-02 00:13:19.816764 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-02 00:13:20.920294 | orchestrator | changed: [testbed-manager] 2026-02-02 00:13:20.920333 | orchestrator | 2026-02-02 00:13:20.920343 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-02 00:13:36.777343 | orchestrator | changed: [testbed-manager] 2026-02-02 00:13:36.777449 | orchestrator | 2026-02-02 00:13:36.777465 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-02 00:13:37.505771 | orchestrator | ok: [testbed-manager] 2026-02-02 00:13:37.505862 | orchestrator | 2026-02-02 00:13:37.505880 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-02 00:13:37.559479 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:13:37.559530 | orchestrator | 2026-02-02 00:13:37.559536 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-02 00:13:38.566198 | orchestrator | changed: [testbed-manager] 2026-02-02 00:13:38.566362 | orchestrator | 2026-02-02 00:13:38.566395 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-02 00:13:39.549525 | orchestrator | changed: [testbed-manager] 2026-02-02 00:13:39.549577 | orchestrator | 2026-02-02 00:13:39.549586 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-02 00:13:40.115799 | orchestrator | changed: [testbed-manager] 2026-02-02 00:13:40.115863 | orchestrator | 2026-02-02 00:13:40.115879 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-02 00:13:40.152860 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-02 00:13:40.152989 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-02 00:13:40.153017 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-02 00:13:40.153037 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-02 00:13:45.286661 | orchestrator | changed: [testbed-manager] 2026-02-02 00:13:45.286722 | orchestrator | 2026-02-02 00:13:45.286734 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-02 00:13:54.692407 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-02 00:13:54.692450 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-02 00:13:54.692458 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-02 00:13:54.692463 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-02 00:13:54.692472 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-02 00:13:54.692477 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-02 00:13:54.692482 | orchestrator | 2026-02-02 00:13:54.692488 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-02 00:13:56.097491 | orchestrator | changed: [testbed-manager] 2026-02-02 00:13:56.097545 | orchestrator | 2026-02-02 00:13:56.097558 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-02 00:13:56.127881 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:13:56.127939 | orchestrator | 2026-02-02 00:13:56.127953 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-02 00:13:59.060992 | orchestrator | changed: [testbed-manager] 2026-02-02 00:13:59.061029 | orchestrator | 2026-02-02 00:13:59.061035 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-02 00:13:59.104585 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:13:59.104624 | orchestrator | 2026-02-02 00:13:59.104632 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-02 00:15:38.432498 | orchestrator | changed: [testbed-manager] 2026-02-02 
00:15:38.432565 | orchestrator | 2026-02-02 00:15:38.432577 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-02 00:15:39.636453 | orchestrator | ok: [testbed-manager] 2026-02-02 00:15:39.636532 | orchestrator | 2026-02-02 00:15:39.636546 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:15:39.636558 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-02 00:15:39.636569 | orchestrator | 2026-02-02 00:15:40.223373 | orchestrator | ok: Runtime: 0:02:28.794359 2026-02-02 00:15:40.241509 | 2026-02-02 00:15:40.241667 | TASK [Reboot manager] 2026-02-02 00:15:41.777913 | orchestrator | ok: Runtime: 0:00:01.006975 2026-02-02 00:15:41.795914 | 2026-02-02 00:15:41.796081 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-02 00:15:58.235104 | orchestrator | ok 2026-02-02 00:15:58.245903 | 2026-02-02 00:15:58.246032 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-02 00:16:58.283254 | orchestrator | ok 2026-02-02 00:16:58.292485 | 2026-02-02 00:16:58.292614 | TASK [Deploy manager + bootstrap nodes] 2026-02-02 00:17:00.994584 | orchestrator | 2026-02-02 00:17:00.994728 | orchestrator | # DEPLOY MANAGER 2026-02-02 00:17:00.994746 | orchestrator | 2026-02-02 00:17:00.994759 | orchestrator | + set -e 2026-02-02 00:17:00.994772 | orchestrator | + echo 2026-02-02 00:17:00.994786 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-02 00:17:00.994802 | orchestrator | + echo 2026-02-02 00:17:00.994873 | orchestrator | + cat /opt/manager-vars.sh 2026-02-02 00:17:00.997544 | orchestrator | export NUMBER_OF_NODES=6 2026-02-02 00:17:00.997615 | orchestrator | 2026-02-02 00:17:00.997630 | orchestrator | export CEPH_VERSION=reef 2026-02-02 00:17:00.997643 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-02 00:17:00.997655 | orchestrator 
| export MANAGER_VERSION=latest 2026-02-02 00:17:00.997682 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-02-02 00:17:00.997692 | orchestrator | 2026-02-02 00:17:00.997708 | orchestrator | export ARA=false 2026-02-02 00:17:00.997718 | orchestrator | export DEPLOY_MODE=manager 2026-02-02 00:17:00.997734 | orchestrator | export TEMPEST=true 2026-02-02 00:17:00.997744 | orchestrator | export IS_ZUUL=true 2026-02-02 00:17:00.997754 | orchestrator | 2026-02-02 00:17:00.997770 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.61 2026-02-02 00:17:00.997781 | orchestrator | export EXTERNAL_API=false 2026-02-02 00:17:00.997791 | orchestrator | 2026-02-02 00:17:00.997800 | orchestrator | export IMAGE_USER=ubuntu 2026-02-02 00:17:00.997813 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-02 00:17:00.997822 | orchestrator | 2026-02-02 00:17:00.997832 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-02 00:17:00.997850 | orchestrator | 2026-02-02 00:17:00.997860 | orchestrator | + echo 2026-02-02 00:17:00.997872 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-02 00:17:00.998585 | orchestrator | ++ export INTERACTIVE=false 2026-02-02 00:17:00.998608 | orchestrator | ++ INTERACTIVE=false 2026-02-02 00:17:00.998619 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-02 00:17:00.998632 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-02 00:17:00.998758 | orchestrator | + source /opt/manager-vars.sh 2026-02-02 00:17:00.998775 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-02 00:17:00.998787 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-02 00:17:00.998798 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-02 00:17:00.998810 | orchestrator | ++ CEPH_VERSION=reef 2026-02-02 00:17:00.998822 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-02 00:17:00.998835 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-02 00:17:00.998846 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-02 00:17:00.998859 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-02-02 00:17:00.998871 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-02-02 00:17:00.998894 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-02-02 00:17:00.998906 | orchestrator | ++ export ARA=false 2026-02-02 00:17:00.998916 | orchestrator | ++ ARA=false 2026-02-02 00:17:00.998926 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-02 00:17:00.998935 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-02 00:17:00.998945 | orchestrator | ++ export TEMPEST=true 2026-02-02 00:17:00.998954 | orchestrator | ++ TEMPEST=true 2026-02-02 00:17:00.998964 | orchestrator | ++ export IS_ZUUL=true 2026-02-02 00:17:00.998973 | orchestrator | ++ IS_ZUUL=true 2026-02-02 00:17:00.998987 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.61 2026-02-02 00:17:00.998997 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.61 2026-02-02 00:17:00.999033 | orchestrator | ++ export EXTERNAL_API=false 2026-02-02 00:17:00.999045 | orchestrator | ++ EXTERNAL_API=false 2026-02-02 00:17:00.999054 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-02 00:17:00.999064 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-02 00:17:00.999073 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-02 00:17:00.999083 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-02 00:17:00.999093 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-02 00:17:00.999102 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-02 00:17:00.999112 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-02 00:17:01.051906 | orchestrator | + docker version 2026-02-02 00:17:01.329838 | orchestrator | Client: Docker Engine - Community 2026-02-02 00:17:01.329948 | orchestrator | Version: 27.5.1 2026-02-02 00:17:01.329972 | orchestrator | API version: 1.47 2026-02-02 00:17:01.329995 | orchestrator | Go version: go1.22.11 2026-02-02 00:17:01.330098 | orchestrator | Git commit: 9f9e405 2026-02-02 00:17:01.330120 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-02 00:17:01.330142 | orchestrator | OS/Arch: linux/amd64 2026-02-02 00:17:01.330160 | orchestrator | Context: default 2026-02-02 00:17:01.330180 | orchestrator | 2026-02-02 00:17:01.330201 | orchestrator | Server: Docker Engine - Community 2026-02-02 00:17:01.330220 | orchestrator | Engine: 2026-02-02 00:17:01.330241 | orchestrator | Version: 27.5.1 2026-02-02 00:17:01.330281 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-02 00:17:01.330338 | orchestrator | Go version: go1.22.11 2026-02-02 00:17:01.330357 | orchestrator | Git commit: 4c9b3b0 2026-02-02 00:17:01.330377 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-02 00:17:01.330395 | orchestrator | OS/Arch: linux/amd64 2026-02-02 00:17:01.330414 | orchestrator | Experimental: false 2026-02-02 00:17:01.330435 | orchestrator | containerd: 2026-02-02 00:17:01.330453 | orchestrator | Version: v2.2.1 2026-02-02 00:17:01.330471 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-02 00:17:01.330493 | orchestrator | runc: 2026-02-02 00:17:01.330515 | orchestrator | Version: 1.3.4 2026-02-02 00:17:01.330535 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-02 00:17:01.330554 | orchestrator | docker-init: 2026-02-02 00:17:01.330572 | orchestrator | Version: 0.19.0 2026-02-02 00:17:01.330592 | orchestrator | GitCommit: de40ad0 2026-02-02 00:17:01.332642 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-02 00:17:01.340941 | orchestrator | + set -e 2026-02-02 00:17:01.341037 | orchestrator | + source /opt/manager-vars.sh 2026-02-02 00:17:01.341055 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-02 00:17:01.341069 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-02 00:17:01.341079 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-02 00:17:01.341089 | orchestrator | ++ CEPH_VERSION=reef 2026-02-02 00:17:01.341099 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-02 
00:17:01.341110 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-02 00:17:01.341120 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-02 00:17:01.341130 | orchestrator | ++ MANAGER_VERSION=latest 2026-02-02 00:17:01.341140 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-02-02 00:17:01.341149 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-02-02 00:17:01.341159 | orchestrator | ++ export ARA=false 2026-02-02 00:17:01.341168 | orchestrator | ++ ARA=false 2026-02-02 00:17:01.341178 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-02 00:17:01.341188 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-02 00:17:01.341197 | orchestrator | ++ export TEMPEST=true 2026-02-02 00:17:01.341207 | orchestrator | ++ TEMPEST=true 2026-02-02 00:17:01.341216 | orchestrator | ++ export IS_ZUUL=true 2026-02-02 00:17:01.341225 | orchestrator | ++ IS_ZUUL=true 2026-02-02 00:17:01.341235 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.61 2026-02-02 00:17:01.341244 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.61 2026-02-02 00:17:01.341254 | orchestrator | ++ export EXTERNAL_API=false 2026-02-02 00:17:01.341263 | orchestrator | ++ EXTERNAL_API=false 2026-02-02 00:17:01.341273 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-02 00:17:01.341282 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-02 00:17:01.341292 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-02 00:17:01.341301 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-02 00:17:01.341311 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-02 00:17:01.341328 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-02 00:17:01.341339 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-02 00:17:01.341348 | orchestrator | ++ export INTERACTIVE=false 2026-02-02 00:17:01.341357 | orchestrator | ++ INTERACTIVE=false 2026-02-02 00:17:01.341367 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-02 00:17:01.341381 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-02-02 00:17:01.341391 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-02 00:17:01.341400 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-02 00:17:01.341410 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-02-02 00:17:01.348622 | orchestrator | + set -e 2026-02-02 00:17:01.348693 | orchestrator | + VERSION=reef 2026-02-02 00:17:01.349269 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-02-02 00:17:01.355455 | orchestrator | + [[ -n ceph_version: reef ]] 2026-02-02 00:17:01.355552 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-02-02 00:17:01.360293 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1 2026-02-02 00:17:01.366102 | orchestrator | + set -e 2026-02-02 00:17:01.366184 | orchestrator | + VERSION=2025.1 2026-02-02 00:17:01.366210 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-02-02 00:17:01.370991 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-02-02 00:17:01.371076 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml 2026-02-02 00:17:01.379292 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-02 00:17:01.380189 | orchestrator | ++ semver latest 7.0.0 2026-02-02 00:17:01.430237 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-02 00:17:01.430400 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-02 00:17:01.430419 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-02 00:17:01.430668 | orchestrator | ++ semver latest 10.0.0-0 2026-02-02 00:17:01.471912 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-02 00:17:01.472029 | orchestrator | ++ semver 2025.1 2025.1 2026-02-02 00:17:01.528675 | orchestrator | + [[ 0 -ge 0 ]] 2026-02-02 00:17:01.528791 | orchestrator | + sed -i 
'/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-02 00:17:01.535441 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-02 00:17:01.540082 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-02 00:17:01.622298 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-02 00:17:01.623463 | orchestrator | + source /opt/venv/bin/activate 2026-02-02 00:17:01.624676 | orchestrator | ++ deactivate nondestructive 2026-02-02 00:17:01.624711 | orchestrator | ++ '[' -n '' ']' 2026-02-02 00:17:01.624724 | orchestrator | ++ '[' -n '' ']' 2026-02-02 00:17:01.624737 | orchestrator | ++ hash -r 2026-02-02 00:17:01.624751 | orchestrator | ++ '[' -n '' ']' 2026-02-02 00:17:01.624764 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-02 00:17:01.624782 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-02 00:17:01.624794 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-02 00:17:01.624808 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-02 00:17:01.624821 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-02 00:17:01.624834 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-02 00:17:01.624847 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-02 00:17:01.624861 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 00:17:01.624895 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 00:17:01.624908 | orchestrator | ++ export PATH 2026-02-02 00:17:01.624921 | orchestrator | ++ '[' -n '' ']' 2026-02-02 00:17:01.624934 | orchestrator | ++ '[' -z '' ']' 2026-02-02 00:17:01.624951 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-02 00:17:01.624962 | orchestrator | ++ PS1='(venv) ' 2026-02-02 00:17:01.624975 | orchestrator | ++ export PS1 2026-02-02 00:17:01.624988 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-02 00:17:01.625032 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-02 00:17:01.625044 | orchestrator | ++ hash -r 2026-02-02 00:17:01.625117 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-02 00:17:02.981642 | orchestrator | 2026-02-02 00:17:02.981764 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-02 00:17:02.981783 | orchestrator | 2026-02-02 00:17:02.981795 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-02 00:17:03.565240 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:03.565330 | orchestrator | 2026-02-02 00:17:03.565344 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-02-02 00:17:04.585228 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:04.585349 | orchestrator | 2026-02-02 00:17:04.585366 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-02 00:17:04.585379 | orchestrator | 2026-02-02 00:17:04.585391 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 00:17:07.067732 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:07.067818 | orchestrator | 2026-02-02 00:17:07.067827 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-02 00:17:07.118271 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:07.118359 | orchestrator | 2026-02-02 00:17:07.118371 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-02 00:17:07.589150 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:07.589219 | orchestrator | 2026-02-02 00:17:07.589227 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-02 00:17:07.634822 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:17:07.634933 | orchestrator | 2026-02-02 00:17:07.634951 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-02 00:17:08.005410 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:08.005545 | orchestrator | 2026-02-02 00:17:08.005562 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-02 00:17:08.347589 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:08.347686 | orchestrator | 2026-02-02 00:17:08.347702 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-02 00:17:08.494346 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:17:08.494446 | orchestrator | 2026-02-02 00:17:08.494462 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-02-02 00:17:08.494474 | orchestrator | 2026-02-02 00:17:08.494486 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 00:17:10.228193 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:10.228303 | orchestrator | 2026-02-02 00:17:10.228321 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-02 00:17:10.321418 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-02 00:17:10.321496 | orchestrator | 2026-02-02 00:17:10.321506 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-02 00:17:10.377769 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-02 00:17:10.377850 | orchestrator | 2026-02-02 00:17:10.377861 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-02 00:17:11.468116 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-02 00:17:11.468196 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-02 00:17:11.468206 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-02 00:17:11.468213 | orchestrator | 2026-02-02 00:17:11.468223 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-02 00:17:13.417177 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-02 00:17:13.417291 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-02 00:17:13.417306 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-02 00:17:13.417319 | orchestrator | 2026-02-02 00:17:13.417331 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-02-02 00:17:14.055319 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-02 00:17:14.055417 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:14.055433 | orchestrator | 2026-02-02 00:17:14.055446 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-02 00:17:14.720197 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-02 00:17:14.720303 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:14.720326 | orchestrator | 2026-02-02 00:17:14.720344 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-02 00:17:14.772626 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:17:14.772723 | orchestrator | 2026-02-02 00:17:14.772769 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-02 00:17:15.137047 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:15.137172 | orchestrator | 2026-02-02 00:17:15.137201 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-02 00:17:15.212349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-02 00:17:15.212452 | orchestrator | 2026-02-02 00:17:15.212494 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-02 00:17:16.312786 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:16.312849 | orchestrator | 2026-02-02 00:17:16.312855 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-02 00:17:17.121831 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:17.121901 | orchestrator | 2026-02-02 00:17:17.121911 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-02 00:17:30.628579 | 
orchestrator | changed: [testbed-manager] 2026-02-02 00:17:30.628662 | orchestrator | 2026-02-02 00:17:30.628673 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-02 00:17:30.673671 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:17:30.673764 | orchestrator | 2026-02-02 00:17:30.673779 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-02 00:17:30.673820 | orchestrator | 2026-02-02 00:17:30.673833 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 00:17:32.564638 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:32.564752 | orchestrator | 2026-02-02 00:17:32.564771 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-02 00:17:32.681044 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-02 00:17:32.681147 | orchestrator | 2026-02-02 00:17:32.681163 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-02 00:17:32.744352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-02 00:17:32.744443 | orchestrator | 2026-02-02 00:17:32.744457 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-02 00:17:35.537025 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:35.537080 | orchestrator | 2026-02-02 00:17:35.537087 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-02 00:17:35.579869 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:35.579922 | orchestrator | 2026-02-02 00:17:35.579927 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-02 00:17:35.708814 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-02 00:17:35.708908 | orchestrator | 2026-02-02 00:17:35.708922 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-02 00:17:38.715078 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-02 00:17:38.715194 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-02 00:17:38.715218 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-02 00:17:38.715237 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-02 00:17:38.715248 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-02 00:17:38.715259 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-02 00:17:38.715271 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-02 00:17:38.715281 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-02 00:17:38.715293 | orchestrator | 2026-02-02 00:17:38.715305 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-02 00:17:39.363025 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:39.363124 | orchestrator | 2026-02-02 00:17:39.363139 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-02 00:17:40.017805 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:40.017876 | orchestrator | 2026-02-02 00:17:40.017883 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-02 00:17:40.095148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-02 00:17:40.095219 | orchestrator | 2026-02-02 00:17:40.095232 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-02-02 00:17:41.330826 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-02 00:17:41.330921 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-02 00:17:41.330932 | orchestrator | 2026-02-02 00:17:41.330940 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-02 00:17:41.962260 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:41.962342 | orchestrator | 2026-02-02 00:17:41.962351 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-02 00:17:42.016069 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:17:42.016137 | orchestrator | 2026-02-02 00:17:42.016143 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-02 00:17:42.096721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-02 00:17:42.096806 | orchestrator | 2026-02-02 00:17:42.096817 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-02 00:17:42.726792 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:42.726881 | orchestrator | 2026-02-02 00:17:42.726889 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-02 00:17:42.793528 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-02 00:17:42.793589 | orchestrator | 2026-02-02 00:17:42.793595 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-02 00:17:44.176587 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-02 00:17:44.176657 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-02-02 00:17:44.176662 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:44.176668 | orchestrator | 2026-02-02 00:17:44.176673 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-02 00:17:44.820961 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:44.821043 | orchestrator | 2026-02-02 00:17:44.821052 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-02 00:17:44.866935 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:17:44.867012 | orchestrator | 2026-02-02 00:17:44.867017 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-02 00:17:44.955747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-02 00:17:44.955850 | orchestrator | 2026-02-02 00:17:44.955866 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-02 00:17:45.505769 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:45.505841 | orchestrator | 2026-02-02 00:17:45.505850 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-02 00:17:45.914752 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:45.914829 | orchestrator | 2026-02-02 00:17:45.914837 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-02 00:17:47.207714 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-02 00:17:47.207808 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-02 00:17:47.207818 | orchestrator | 2026-02-02 00:17:47.207827 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-02 00:17:47.830122 | orchestrator | changed: [testbed-manager] 2026-02-02 
00:17:47.830211 | orchestrator | 2026-02-02 00:17:47.830225 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-02 00:17:48.209625 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:48.209724 | orchestrator | 2026-02-02 00:17:48.209737 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-02 00:17:48.554430 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:48.554528 | orchestrator | 2026-02-02 00:17:48.554545 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-02 00:17:48.596767 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:17:48.596862 | orchestrator | 2026-02-02 00:17:48.596877 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-02 00:17:48.682128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-02 00:17:48.682245 | orchestrator | 2026-02-02 00:17:48.682270 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-02 00:17:48.741631 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:48.741718 | orchestrator | 2026-02-02 00:17:48.741731 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-02 00:17:50.777631 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-02 00:17:50.777735 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-02 00:17:50.777752 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-02 00:17:50.777764 | orchestrator | 2026-02-02 00:17:50.777777 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-02 00:17:51.467863 | orchestrator | changed: [testbed-manager] 2026-02-02 
00:17:51.467924 | orchestrator | 2026-02-02 00:17:51.467990 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-02 00:17:52.223354 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:52.223489 | orchestrator | 2026-02-02 00:17:52.223506 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-02 00:17:52.994706 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:52.994803 | orchestrator | 2026-02-02 00:17:52.994818 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-02 00:17:53.072879 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-02 00:17:53.073038 | orchestrator | 2026-02-02 00:17:53.073054 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-02 00:17:53.118113 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:53.118207 | orchestrator | 2026-02-02 00:17:53.118222 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-02 00:17:53.814354 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-02 00:17:53.814438 | orchestrator | 2026-02-02 00:17:53.814448 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-02 00:17:53.885601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-02 00:17:53.885694 | orchestrator | 2026-02-02 00:17:53.885709 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-02 00:17:54.621435 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:54.621559 | orchestrator | 2026-02-02 00:17:54.621581 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-02-02 00:17:55.226390 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:55.226463 | orchestrator | 2026-02-02 00:17:55.226472 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-02 00:17:55.279425 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:17:55.279508 | orchestrator | 2026-02-02 00:17:55.279519 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-02 00:17:55.340821 | orchestrator | ok: [testbed-manager] 2026-02-02 00:17:55.340908 | orchestrator | 2026-02-02 00:17:55.340918 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-02 00:17:56.227734 | orchestrator | changed: [testbed-manager] 2026-02-02 00:17:56.227881 | orchestrator | 2026-02-02 00:17:56.227899 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-02 00:19:09.549408 | orchestrator | changed: [testbed-manager] 2026-02-02 00:19:09.549555 | orchestrator | 2026-02-02 00:19:09.549574 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-02 00:19:10.519398 | orchestrator | ok: [testbed-manager] 2026-02-02 00:19:10.519489 | orchestrator | 2026-02-02 00:19:10.519504 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-02 00:19:10.574707 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:19:10.574799 | orchestrator | 2026-02-02 00:19:10.574835 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-02 00:19:13.392719 | orchestrator | changed: [testbed-manager] 2026-02-02 00:19:13.392812 | orchestrator | 2026-02-02 00:19:13.392824 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-02-02 00:19:13.499711 | orchestrator | ok: [testbed-manager] 2026-02-02 00:19:13.499910 | orchestrator | 2026-02-02 00:19:13.499941 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-02 00:19:13.499964 | orchestrator | 2026-02-02 00:19:13.499982 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-02 00:19:13.551286 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:19:13.551413 | orchestrator | 2026-02-02 00:19:13.551440 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-02 00:20:13.597609 | orchestrator | Pausing for 60 seconds 2026-02-02 00:20:13.597715 | orchestrator | changed: [testbed-manager] 2026-02-02 00:20:13.597727 | orchestrator | 2026-02-02 00:20:13.597738 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-02 00:20:16.646006 | orchestrator | changed: [testbed-manager] 2026-02-02 00:20:16.646162 | orchestrator | 2026-02-02 00:20:16.646178 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-02 00:21:18.780106 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-02 00:21:18.780204 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-02-02 00:21:18.780216 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
2026-02-02 00:21:18.780225 | orchestrator | changed: [testbed-manager] 2026-02-02 00:21:18.780240 | orchestrator | 2026-02-02 00:21:18.780255 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-02 00:21:30.067015 | orchestrator | changed: [testbed-manager] 2026-02-02 00:21:30.067133 | orchestrator | 2026-02-02 00:21:30.067150 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-02 00:21:30.141868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-02 00:21:30.141945 | orchestrator | 2026-02-02 00:21:30.141953 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-02 00:21:30.141961 | orchestrator | 2026-02-02 00:21:30.141967 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-02 00:21:30.192773 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:21:30.192898 | orchestrator | 2026-02-02 00:21:30.192925 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-02 00:21:30.289135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-02 00:21:30.289230 | orchestrator | 2026-02-02 00:21:30.289246 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-02 00:21:31.108913 | orchestrator | changed: [testbed-manager] 2026-02-02 00:21:31.109041 | orchestrator | 2026-02-02 00:21:31.109070 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-02 00:21:34.473550 | orchestrator | ok: [testbed-manager] 2026-02-02 00:21:34.473652 | orchestrator | 2026-02-02 00:21:34.473670 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-02-02 00:21:34.547287 | orchestrator | ok: [testbed-manager] => { 2026-02-02 00:21:34.547388 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-02 00:21:34.547416 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-02 00:21:34.547438 | orchestrator | "Checking running containers against expected versions...", 2026-02-02 00:21:34.547451 | orchestrator | "", 2026-02-02 00:21:34.547465 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-02 00:21:34.547484 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-02-02 00:21:34.547504 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.547522 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-02-02 00:21:34.547540 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.547560 | orchestrator | "", 2026-02-02 00:21:34.547581 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-02 00:21:34.547601 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-02-02 00:21:34.547620 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.547632 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-02-02 00:21:34.547643 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.547654 | orchestrator | "", 2026-02-02 00:21:34.547665 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-02 00:21:34.547676 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-02-02 00:21:34.547754 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.547773 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-02-02 00:21:34.547785 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.547797 | orchestrator | "", 2026-02-02 00:21:34.547809 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-02 00:21:34.547820 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-02-02 00:21:34.547831 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.547866 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-02-02 00:21:34.547883 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.547901 | orchestrator | "", 2026-02-02 00:21:34.547920 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-02 00:21:34.547938 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-02-02 00:21:34.547956 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.547974 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-02-02 00:21:34.547993 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.548011 | orchestrator | "", 2026-02-02 00:21:34.548028 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-02 00:21:34.548047 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-02 00:21:34.548065 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.548084 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-02 00:21:34.548104 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.548123 | orchestrator | "", 2026-02-02 00:21:34.548142 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-02 00:21:34.548161 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-02 00:21:34.548180 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.548197 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-02 00:21:34.548215 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.548231 | orchestrator | "", 2026-02-02 00:21:34.548259 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-02 00:21:34.548278 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-02 00:21:34.548294 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.548310 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-02 00:21:34.548328 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.548350 | orchestrator | "", 2026-02-02 00:21:34.548368 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-02 00:21:34.548386 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-02-02 00:21:34.548403 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.548422 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-02-02 00:21:34.548440 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.548459 | orchestrator | "", 2026-02-02 00:21:34.548478 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-02 00:21:34.548496 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-02 00:21:34.548514 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.548531 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-02 00:21:34.548550 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.548568 | orchestrator | "", 2026-02-02 00:21:34.548585 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-02 00:21:34.548604 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-02 00:21:34.548624 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.548643 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-02 00:21:34.548662 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.548707 | orchestrator | "", 2026-02-02 00:21:34.548730 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-02 00:21:34.548750 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-02 00:21:34.548770 | 
orchestrator | " Enabled: true", 2026-02-02 00:21:34.548789 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-02 00:21:34.548809 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.548828 | orchestrator | "", 2026-02-02 00:21:34.548846 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-02 00:21:34.548866 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-02 00:21:34.548884 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.548902 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-02 00:21:34.548919 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.548958 | orchestrator | "", 2026-02-02 00:21:34.548978 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-02 00:21:34.548996 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-02 00:21:34.549016 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.549035 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-02 00:21:34.549054 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.549073 | orchestrator | "", 2026-02-02 00:21:34.549091 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-02 00:21:34.549134 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-02 00:21:34.549154 | orchestrator | " Enabled: true", 2026-02-02 00:21:34.549172 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-02 00:21:34.549190 | orchestrator | " Status: ✅ MATCH", 2026-02-02 00:21:34.549209 | orchestrator | "", 2026-02-02 00:21:34.549226 | orchestrator | "=== Summary ===", 2026-02-02 00:21:34.549242 | orchestrator | "Errors (version mismatches): 0", 2026-02-02 00:21:34.549258 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-02 00:21:34.549273 | orchestrator | "", 2026-02-02 00:21:34.549288 | orchestrator | "✅ All running containers match expected 
versions!" 2026-02-02 00:21:34.549304 | orchestrator | ] 2026-02-02 00:21:34.549321 | orchestrator | } 2026-02-02 00:21:34.549338 | orchestrator | 2026-02-02 00:21:34.549355 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-02 00:21:34.602738 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:21:34.602825 | orchestrator | 2026-02-02 00:21:34.602839 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:21:34.602854 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-02 00:21:34.602866 | orchestrator | 2026-02-02 00:21:34.715784 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-02 00:21:34.715845 | orchestrator | + deactivate 2026-02-02 00:21:34.715854 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-02 00:21:34.715862 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 00:21:34.715869 | orchestrator | + export PATH 2026-02-02 00:21:34.715876 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-02 00:21:34.715883 | orchestrator | + '[' -n '' ']' 2026-02-02 00:21:34.715890 | orchestrator | + hash -r 2026-02-02 00:21:34.715896 | orchestrator | + '[' -n '' ']' 2026-02-02 00:21:34.715903 | orchestrator | + unset VIRTUAL_ENV 2026-02-02 00:21:34.715910 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-02 00:21:34.715917 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-02 00:21:34.716320 | orchestrator | + unset -f deactivate 2026-02-02 00:21:34.716337 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-02 00:21:34.721813 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-02 00:21:34.721864 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-02 00:21:34.721873 | orchestrator | + local max_attempts=60 2026-02-02 00:21:34.721882 | orchestrator | + local name=ceph-ansible 2026-02-02 00:21:34.721891 | orchestrator | + local attempt_num=1 2026-02-02 00:21:34.722228 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:21:34.745282 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-02 00:21:34.745375 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-02 00:21:34.745390 | orchestrator | + local max_attempts=60 2026-02-02 00:21:34.745402 | orchestrator | + local name=kolla-ansible 2026-02-02 00:21:34.745412 | orchestrator | + local attempt_num=1 2026-02-02 00:21:34.745817 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-02 00:21:34.768957 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-02 00:21:34.769031 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-02 00:21:34.769045 | orchestrator | + local max_attempts=60 2026-02-02 00:21:34.769057 | orchestrator | + local name=osism-ansible 2026-02-02 00:21:34.769068 | orchestrator | + local attempt_num=1 2026-02-02 00:21:34.769406 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-02 00:21:34.801276 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-02 00:21:34.801349 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-02 00:21:34.801469 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-02 00:21:35.436233 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-02 00:21:35.603187 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-02 00:21:35.603255 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-02 00:21:35.603265 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-02 00:21:35.603274 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-02 00:21:35.603284 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-02 00:21:35.603293 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-02 00:21:35.603301 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-02 00:21:35.603324 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-02 00:21:35.603333 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-02 00:21:35.603341 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-02 00:21:35.603349 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-02-02 00:21:35.603357 | orchestrator | 
manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-02 00:21:35.603365 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-02 00:21:35.603373 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-02 00:21:35.603381 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-02 00:21:35.603389 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-02 00:21:35.609794 | orchestrator | ++ semver latest 7.0.0 2026-02-02 00:21:35.660952 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-02 00:21:35.661053 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-02 00:21:35.661070 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-02 00:21:35.664456 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-02 00:21:47.596095 | orchestrator | 2026-02-02 00:21:47 | INFO  | Prepare task for execution of resolvconf. 2026-02-02 00:21:47.798907 | orchestrator | 2026-02-02 00:21:47 | INFO  | Task ebaaf03a-3b25-4802-ba8e-36d62f75de0e (resolvconf) was prepared for execution. 2026-02-02 00:21:47.799041 | orchestrator | 2026-02-02 00:21:47 | INFO  | It takes a moment until task ebaaf03a-3b25-4802-ba8e-36d62f75de0e (resolvconf) has been started and output is visible here. 
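The `wait_for_container_healthy` calls traced above poll each container's health status via `docker inspect` until it reports `healthy`. A minimal sketch of such a helper, reconstructed from the trace (the real function lives in the testbed deploy scripts; the retry delay between attempts is an assumption):

```shell
# Sketch of a health-wait helper like the one traced above.
# Assumes `docker` is on PATH; the 5s sleep between attempts is a guess,
# since the trace shows no retries were needed.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Query the container's health status until Docker reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the run above, `ceph-ansible`, `kolla-ansible`, and `osism-ansible` each reported `healthy` on the first probe, so the loop never slept.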
2026-02-02 00:22:02.149271 | orchestrator | 2026-02-02 00:22:02.149386 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-02 00:22:02.149404 | orchestrator | 2026-02-02 00:22:02.149417 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 00:22:02.149429 | orchestrator | Monday 02 February 2026 00:21:51 +0000 (0:00:00.144) 0:00:00.144 ******* 2026-02-02 00:22:02.149440 | orchestrator | ok: [testbed-manager] 2026-02-02 00:22:02.149452 | orchestrator | 2026-02-02 00:22:02.149463 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-02 00:22:02.149476 | orchestrator | Monday 02 February 2026 00:21:55 +0000 (0:00:03.857) 0:00:04.001 ******* 2026-02-02 00:22:02.149487 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:22:02.149499 | orchestrator | 2026-02-02 00:22:02.149510 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-02 00:22:02.149521 | orchestrator | Monday 02 February 2026 00:21:55 +0000 (0:00:00.068) 0:00:04.070 ******* 2026-02-02 00:22:02.149532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-02 00:22:02.149544 | orchestrator | 2026-02-02 00:22:02.149555 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-02 00:22:02.149576 | orchestrator | Monday 02 February 2026 00:21:55 +0000 (0:00:00.080) 0:00:04.150 ******* 2026-02-02 00:22:02.149588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-02 00:22:02.149599 | orchestrator | 2026-02-02 00:22:02.149610 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-02 00:22:02.149621 | orchestrator | Monday 02 February 2026 00:21:56 +0000 (0:00:00.085) 0:00:04.236 ******* 2026-02-02 00:22:02.149633 | orchestrator | ok: [testbed-manager] 2026-02-02 00:22:02.149644 | orchestrator | 2026-02-02 00:22:02.149722 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-02 00:22:02.149735 | orchestrator | Monday 02 February 2026 00:21:57 +0000 (0:00:01.182) 0:00:05.418 ******* 2026-02-02 00:22:02.149746 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:22:02.149757 | orchestrator | 2026-02-02 00:22:02.149768 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-02 00:22:02.149779 | orchestrator | Monday 02 February 2026 00:21:57 +0000 (0:00:00.059) 0:00:05.477 ******* 2026-02-02 00:22:02.149790 | orchestrator | ok: [testbed-manager] 2026-02-02 00:22:02.149803 | orchestrator | 2026-02-02 00:22:02.149817 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-02 00:22:02.149830 | orchestrator | Monday 02 February 2026 00:21:57 +0000 (0:00:00.488) 0:00:05.966 ******* 2026-02-02 00:22:02.149843 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:22:02.149857 | orchestrator | 2026-02-02 00:22:02.149870 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-02 00:22:02.149885 | orchestrator | Monday 02 February 2026 00:21:57 +0000 (0:00:00.069) 0:00:06.035 ******* 2026-02-02 00:22:02.149898 | orchestrator | changed: [testbed-manager] 2026-02-02 00:22:02.149909 | orchestrator | 2026-02-02 00:22:02.149920 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-02 00:22:02.149931 | orchestrator | Monday 02 February 2026 00:21:58 +0000 (0:00:00.565) 0:00:06.601 ******* 2026-02-02 00:22:02.149942 | orchestrator | changed: 
[testbed-manager] 2026-02-02 00:22:02.149953 | orchestrator | 2026-02-02 00:22:02.149988 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-02 00:22:02.149999 | orchestrator | Monday 02 February 2026 00:21:59 +0000 (0:00:01.176) 0:00:07.777 ******* 2026-02-02 00:22:02.150010 | orchestrator | ok: [testbed-manager] 2026-02-02 00:22:02.150080 | orchestrator | 2026-02-02 00:22:02.150092 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-02 00:22:02.150103 | orchestrator | Monday 02 February 2026 00:22:00 +0000 (0:00:00.972) 0:00:08.750 ******* 2026-02-02 00:22:02.150114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-02 00:22:02.150125 | orchestrator | 2026-02-02 00:22:02.150135 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-02 00:22:02.150146 | orchestrator | Monday 02 February 2026 00:22:00 +0000 (0:00:00.083) 0:00:08.833 ******* 2026-02-02 00:22:02.150158 | orchestrator | changed: [testbed-manager] 2026-02-02 00:22:02.150169 | orchestrator | 2026-02-02 00:22:02.150179 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:22:02.150192 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-02 00:22:02.150203 | orchestrator | 2026-02-02 00:22:02.150214 | orchestrator | 2026-02-02 00:22:02.150225 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 00:22:02.150235 | orchestrator | Monday 02 February 2026 00:22:01 +0000 (0:00:01.211) 0:00:10.045 ******* 2026-02-02 00:22:02.150246 | orchestrator | =============================================================================== 2026-02-02 00:22:02.150257 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.86s 2026-02-02 00:22:02.150268 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.21s 2026-02-02 00:22:02.150279 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.18s 2026-02-02 00:22:02.150290 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.18s 2026-02-02 00:22:02.150300 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.97s 2026-02-02 00:22:02.150311 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s 2026-02-02 00:22:02.150342 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2026-02-02 00:22:02.150354 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-02-02 00:22:02.150364 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-02-02 00:22:02.150375 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-02-02 00:22:02.150386 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-02-02 00:22:02.150403 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-02-02 00:22:02.150415 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-02-02 00:22:02.469966 | orchestrator | + osism apply sshconfig 2026-02-02 00:22:14.527086 | orchestrator | 2026-02-02 00:22:14 | INFO  | Prepare task for execution of sshconfig. 2026-02-02 00:22:14.603963 | orchestrator | 2026-02-02 00:22:14 | INFO  | Task 250d1cff-567d-41f2-bc83-045e35b64336 (sshconfig) was prepared for execution. 
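The sshconfig play that follows writes one fragment per host under `~/.ssh/config.d` ("Ensure config for each host exist") and then concatenates them ("Assemble ssh config"). A minimal sketch of that fragment-and-assemble pattern, using a temp directory and an illustrative `Host` stanza rather than the role's actual template:

```shell
# Fragment-per-host plus assemble, as the sshconfig role's task names suggest.
# The stanza content below is illustrative, not the role's real template.
workdir="$(mktemp -d)"
mkdir -p "$workdir/config.d"
for host in testbed-node-0 testbed-node-1 testbed-manager; do
    printf 'Host %s\n    StrictHostKeyChecking accept-new\n\n' "$host" \
        > "$workdir/config.d/$host"
done
# Glob expansion is lexically sorted, giving the assembled file a stable order.
cat "$workdir"/config.d/* > "$workdir/config"
```

Keeping per-host fragments makes the role idempotent per host: rewriting one host's file and reassembling never disturbs the stanzas generated for the others.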
2026-02-02 00:22:14.604077 | orchestrator | 2026-02-02 00:22:14 | INFO  | It takes a moment until task 250d1cff-567d-41f2-bc83-045e35b64336 (sshconfig) has been started and output is visible here. 2026-02-02 00:22:26.618843 | orchestrator | 2026-02-02 00:22:26.618952 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-02 00:22:26.618969 | orchestrator | 2026-02-02 00:22:26.618977 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-02 00:22:26.618987 | orchestrator | Monday 02 February 2026 00:22:18 +0000 (0:00:00.170) 0:00:00.170 ******* 2026-02-02 00:22:26.619019 | orchestrator | ok: [testbed-manager] 2026-02-02 00:22:26.619030 | orchestrator | 2026-02-02 00:22:26.619039 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-02 00:22:26.619048 | orchestrator | Monday 02 February 2026 00:22:19 +0000 (0:00:00.528) 0:00:00.699 ******* 2026-02-02 00:22:26.619056 | orchestrator | changed: [testbed-manager] 2026-02-02 00:22:26.619067 | orchestrator | 2026-02-02 00:22:26.619075 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-02 00:22:26.619082 | orchestrator | Monday 02 February 2026 00:22:19 +0000 (0:00:00.482) 0:00:01.182 ******* 2026-02-02 00:22:26.619091 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-02 00:22:26.619101 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-02 00:22:26.619109 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-02 00:22:26.619118 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-02 00:22:26.619127 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-02 00:22:26.619136 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-02-02 00:22:26.619145 | orchestrator | changed: 
[testbed-manager] => (item=testbed-manager) 2026-02-02 00:22:26.619153 | orchestrator | 2026-02-02 00:22:26.619161 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-02 00:22:26.619170 | orchestrator | Monday 02 February 2026 00:22:25 +0000 (0:00:05.720) 0:00:06.902 ******* 2026-02-02 00:22:26.619178 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:22:26.619186 | orchestrator | 2026-02-02 00:22:26.619194 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-02 00:22:26.619203 | orchestrator | Monday 02 February 2026 00:22:25 +0000 (0:00:00.078) 0:00:06.980 ******* 2026-02-02 00:22:26.619212 | orchestrator | changed: [testbed-manager] 2026-02-02 00:22:26.619221 | orchestrator | 2026-02-02 00:22:26.619230 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:22:26.619240 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:22:26.619249 | orchestrator | 2026-02-02 00:22:26.619258 | orchestrator | 2026-02-02 00:22:26.619266 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 00:22:26.619275 | orchestrator | Monday 02 February 2026 00:22:26 +0000 (0:00:00.601) 0:00:07.582 ******* 2026-02-02 00:22:26.619297 | orchestrator | =============================================================================== 2026-02-02 00:22:26.619304 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.72s 2026-02-02 00:22:26.619312 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s 2026-02-02 00:22:26.619319 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.53s 2026-02-02 00:22:26.619326 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.48s 2026-02-02 00:22:26.619334 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-02-02 00:22:26.975032 | orchestrator | + osism apply known-hosts 2026-02-02 00:22:38.989990 | orchestrator | 2026-02-02 00:22:38 | INFO  | Prepare task for execution of known-hosts. 2026-02-02 00:22:39.057486 | orchestrator | 2026-02-02 00:22:39 | INFO  | Task 07e7f938-bec8-47ce-8343-2e4e131d8f9a (known-hosts) was prepared for execution. 2026-02-02 00:22:39.057572 | orchestrator | 2026-02-02 00:22:39 | INFO  | It takes a moment until task 07e7f938-bec8-47ce-8343-2e4e131d8f9a (known-hosts) has been started and output is visible here. 2026-02-02 00:22:54.324602 | orchestrator | 2026-02-02 00:22:54.324741 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-02 00:22:54.324752 | orchestrator | 2026-02-02 00:22:54.324759 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-02 00:22:54.324787 | orchestrator | Monday 02 February 2026 00:22:42 +0000 (0:00:00.124) 0:00:00.124 ******* 2026-02-02 00:22:54.324795 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-02 00:22:54.324802 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-02 00:22:54.324808 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-02 00:22:54.324814 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-02 00:22:54.324821 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-02 00:22:54.324827 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-02 00:22:54.324841 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-02 00:22:54.324847 | orchestrator | 2026-02-02 00:22:54.324854 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-02 
00:22:54.324862 | orchestrator | Monday 02 February 2026 00:22:48 +0000 (0:00:05.920) 0:00:06.045 ******* 2026-02-02 00:22:54.324869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-02 00:22:54.324878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-02 00:22:54.324884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-02 00:22:54.324890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-02 00:22:54.324896 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-02 00:22:54.324902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-02 00:22:54.324908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-02 00:22:54.324915 | orchestrator | 2026-02-02 00:22:54.324921 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:22:54.324927 | orchestrator | Monday 02 February 2026 00:22:48 +0000 (0:00:00.163) 0:00:06.208 ******* 2026-02-02 00:22:54.324935 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBnlnL/NGfr5bnSq79mfOa0EonYwZyVolaRdN5eJ+Be5QfMKbPJNENnamcFKPuLVpLSx3jcQMM15TKKM7nUgolJ6+EVm5OEtnnHq9pdbVA/KkBQr+Akz/59qccHmPB6SlOlleH2NPhJloGbHB4AsqnOmA0+to/bk+hXi/OMkz4796GBvj+aysLIJ3fTI87ne6IhWbIjtrydS2t8EdXldbPbuJTJoX/l3XjP+2ZZYSIyytgB6nO2lJkU2f1LjwNk3uV8N/80u74TgYw6uMvP1KVwepVq4UDTgkIRDLvqFVGmdAs1XcwNmkzOsqZugA4m3EDzO4iGMq6T7TdWANuvf8X/fegC41AzGyNO5BsCBHRlnlTPMzdp1uPpuoJvOhI0ZAk89bZUsUyHAZW2SEG6QaTb/zcry9UbzxNLZAqJW0nSJFcGUV6tdLNj2tKRbAvgPUsUWnjVpZj2OfehNLsTCTXUg4S6SbGiqR6UoWh+i7V8JcTRWpoqXc4WxfnnTHndWs=) 2026-02-02 00:22:54.324944 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP4HA04odZzme52S/oy83qQVNJYHZtSKj/QgBfUwMI2GBsC2Xmpm6Js77cMwr2h3qrOFYd2fq07Pp2fXVWWTXAU=) 2026-02-02 00:22:54.324952 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL55BwRChSjlToB+DEtZjOxUIOujMsJUpPAeKfUbUTrQ) 2026-02-02 00:22:54.324961 | orchestrator | 2026-02-02 00:22:54.324967 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:22:54.324973 | orchestrator | Monday 02 February 2026 00:22:49 +0000 (0:00:01.262) 0:00:07.471 ******* 2026-02-02 00:22:54.324985 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDQuXln3f6EDS/5/lj8+w1t2j8bRlrXZ+2ww94t93/O4gQ0jXPM7dYaOUHdFF0b31nKp2rQSgmGfVM1wsR/iqvc=) 2026-02-02 00:22:54.324991 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGT3yH2ROyp/pOKlizqIOJVDkrbeC/MWGG22KZtXLEXv) 2026-02-02 00:22:54.325016 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC2n6zhLVk/UALqd1ffwof6UchjVda4qD3frRAi+VPz1r/AL2ZAIF30U4IMRcFhRS5OFHPMcPHnGr6QaOK4xsPC69hZCJ8MYNYOtoV0mcjLhJpPcCNlIS28agEQACRvcHEQlD6kJ/EP3Sql19q1ApdqkeV+RTURDtsmDWWUy12wKTlKRCY7SdRWI6XqEwsN0D7hQ6PI/CSTurBOowxglOasRpoBKSE4UBG6hlBeZSmY9flgYW3yB0d3wQNMFtQoPrLCxdCTIDHE7HtBooJP5seGcHBdFMB7jpcnkL+wLXnxPZ1KU0CJUIgHP6FFjzxU5MErGmic13DhJD231eg+L3sXu86n3NrIEC6CKxSxJ/OUhGrhnMc5SCWpaki4TmUZz15q/IeIXK+UlzEkFEicl+Qqh8qwfcauC0/SV8pzVthF+Pu4sp5QJBB8/S4ZewLnmPK23UBLIkagq4vlBr3tV3srnlsXEJewA8eai87AGbRRufgOiLPYP6Q4Icu0G/ZggDc=) 2026-02-02 00:22:54.325024 | orchestrator | 2026-02-02 00:22:54.325030 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:22:54.325036 | orchestrator | Monday 02 February 2026 00:22:50 +0000 (0:00:01.120) 0:00:08.591 ******* 2026-02-02 00:22:54.325043 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOL/qODFeU7cBCQWNnVZPIz50ZLIu60pwHPISonstgqw) 2026-02-02 00:22:54.325096 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMU4kPm3rtsj5UdW1io6uLmI9V3h9EGwphPc+I2OIqVvTPP9i9YjaPL6WTaIhO+3hq+LpERPilNTYUW0cFm4+I4/2a9FfTk+ocWYrDh3usqFTc2q7LTQ2U43G3NwwrSJf/OgKq4DMbuS7OL3/YeYWnp2l2PYPv4iYBOVJYhtgwCVWRC+dqRoKrvMyTj8pdXJ8gbrXBzw96VN1NB4oconcq57qR6vpw8NgwT2oORCemzRUWr/ejwgnj2Q+0mhelxYAncBYR1odynYl6WWQAonpr0F6Fy4zbLRdofTE1rY9om7Q+st3TGHqyoCmAuUljJUQXHHU7CTG9UPsJKKe48OZnP2NHV8PzoQZIwd2i9krXw3wtzd5ZGi/XnLM72OKFX/C6H4vPfSXJ7bnWWDnDwnx6DTO8OG2wHeO1mdIvGXuglHaPF8CyAu95+DzIEyF1l7IsTm8PT8iuSVU7urOa1FXWyfNuJI2Cm1Ge4fQ+3pgjVRWi+1X/ZpOaVgaRUQUK50U=) 2026-02-02 00:22:54.325104 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN1n+HLaNqdqy/35NMBd/k84TXXv0b1S6ESsrqRkestl0GZ1T9XxGRnSj2Q7fyojIZWCs2wsg5xuXj0b+BGZ/e4=) 2026-02-02 00:22:54.325110 | orchestrator | 2026-02-02 00:22:54.325116 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:22:54.325126 | orchestrator | Monday 02 February 2026 00:22:51 +0000 (0:00:01.099) 0:00:09.691 ******* 2026-02-02 00:22:54.325151 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDo3SPbZcr8o0wd/cnbJxtSseZphWG6ty35zTHYu6w9QjK6jLj1DrDMJFPRvGVCGOKJP9F3jH5aGDNoNmITa4MAD2ybtac9q0pQ3/sgzbYv4ndFkYjnpSwNQeYJ6LiVyE45xY9m+9mx3nCfckcUSmAb9HhPvcyC2dA+nT5fAtwTmiZH29WNpbFyst4B25hE7U+WwuzVpqH33SJzzSXLhtlk0qZ6yrjMJ8jqHfpI+6rnBqYcViG5wZJZXOEcAYpBYyRtbDbibUH3cAQ4ukPiqpxJEe2NPEXsNGZYf7G83Sm0Sc1geW7xQQkBv29Sf4W5zq0YJO+f5Oi002VZq7CoSb/JmU4F3p25P933CYhUy6BbPkOgkb4YLDlSv+9bpYLT35Gp7iHmSKH7EiatjXhyJi6hM+iEA2DgDj4Vo+1sYTdVdyEHLZxstwQ/tOFKIihqGdwRsysPhNUXHCg/dm9MJTlfFAjYf/bdbLnP2h8zkn7aDDmc6tvDz+Y/6V1rhQzZbQk=) 2026-02-02 00:22:54.325167 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDKWfBPR/yySvwNUDGRtuQThX45y3jg+wzm7SkcWJnQyqDR0kLAv+rJV3v40CPaK6tMj22lvGjyhI33N42X22Zk=) 2026-02-02 00:22:54.325175 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH6HpR0GBteSdZG4JYkqeRe2q5pL4Mab3q33jlGUp+w3) 2026-02-02 00:22:54.325183 | orchestrator | 2026-02-02 00:22:54.325191 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:22:54.325199 | orchestrator | Monday 02 February 2026 00:22:52 +0000 (0:00:01.071) 0:00:10.762 ******* 2026-02-02 00:22:54.325206 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDMFOOa9tcJkc5WW2h38FVGBGE2cWMUNtD3kuBqdVJcw45K6mMKCWc3QP+MxzbPbJwZ7ttXp3dd7jgmdsBd7poEJyvCyvwo8FKGFG/0JvY2yEL9e9wMlvRm7EswwxusUQ4XUR0t+ceISaH7MBlC41zh7BUu34orHVImvelctKeTMsUpBsqlBq1X4eMqyWq9lehMfSMZsFuqYMFrT8PcXKRuOa4RFxn9Gg3qikUc+F5jxn19nnpqEnGo46Nqu2cuDiNUZTloWQrmFV5zQDwsqiUByE0WEB5rI4tAfvbkFDZCIwKlhw21N17mzgzmjKaoL/uu6w5ABeWhnSuf6jKzbyUIYTuHgUwHuRrzmTTTRZHrGufEwc3Kovch8P+I/KPqDKx/Xq/1JDvLzrlqLcIXZ7KgdOkjFMkWYEUJRZaCY9Y+4TH7+yf7dht66aK9/K9bpmwawSUQWs8qooS5+Qi70w/aTj53RjDiNb0+adhK03IL5+gAetSFYHMA04Nk5Y0FINU=) 2026-02-02 00:22:54.325219 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMCPFI6caQK+bbJ5yLbjNDMNo+CPDThXMpqH705ANQThuBi+Gsn6XIZ5CyWl3sgXhq/cuAF91OpfnlA/nvu0hxA=) 2026-02-02 00:22:54.325226 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM6yUC/R552mJmXsBfLEMk0MSjMRAHGKIrMyHVHO6sjB) 2026-02-02 00:22:54.325234 | orchestrator | 2026-02-02 00:22:54.325241 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:22:54.325249 | orchestrator | Monday 02 February 2026 00:22:53 +0000 (0:00:01.074) 0:00:11.837 ******* 2026-02-02 00:22:54.325263 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZbYtiWK7ZewGmRy58xBk9Xew4e6mWxU/+RBTP2zo0UsD/hXw3qIB+PcndTeKoSIbv2Vf/kmqAWPZcHLzFnmkgRGDNNzGXuVJhIeLx7Or0CKsltZjmGCqmHt5H3JL/96T5bgfr49pVhKkg3yUsDp5YbO7LXJFs5DS/m6DhWmisr6evkPI3h39RsLsA9lBSEsjP92iBegcSBmkxIU/FbL1FtsE2YcKiyBxPbNcqsSnrqfx1EBeNa4oPazHg1HF9iJMhf5EDXThH2C+/xOskByXXB6iEDpj7wdvxFLHf6dN6J6PjbATuuw5tEirQOTZqdFVY08uSiICBQmDCuxMDT7nZhquH6SK9IJSLV7HYbi3hiL2E9KgvMVT/UDFe2PCs3/oPAXL0yum4kEoDf/nhchLheRgXLpfqnybwnZw7KpzzLvYFnI9yKONYl0PFVp5mHw9UGOuIwTOVwfKIx4ZppvEhZS0ZE+rbQ6DUMCIIjeqJ7PKAv75nWw31zFkBpzCofyk=) 2026-02-02 00:23:06.180916 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG0UyxYman5KPlDRpYQ79N7tohxkSu66XYPButeex9pUfi+6l6saO2hyH16OASpXuFFbuOAV3QCi8K6zLNQeLwg=) 2026-02-02 00:23:06.181061 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJjSqKARJNlgD4UDwp7YQFKoJLZi5IlmueM+FhswIxq) 2026-02-02 00:23:06.181080 | orchestrator | 2026-02-02 00:23:06.181094 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:23:06.181109 | orchestrator | Monday 02 February 2026 00:22:55 +0000 (0:00:01.095) 0:00:12.932 ******* 2026-02-02 00:23:06.181122 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINPbDDw+AmhcLyWNdnqm51asK1CfdAA2WuIfaJTSYTMD) 2026-02-02 00:23:06.181137 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDASdUEG1ocSQVwPDaXTNBUCKYs26m7nHAJz7c7s81Bbq+Pd4HTWzf4OhCsGsr33Gs1q3rFxXn9JxGhO+brqbJKi1PMX9f6RacozMEuj1FYx95Xk4WmaQbCl3DeVQfHPvrVfsTDBa3kI2v4X6CFhzdXDVhaBYTlTA3IHuxG/ehwfmuvTjb2HJgZPusj2aChOy6/I3YWLgGn4GVTG7bklwQadLxakF3Z8T7UdHyATWDitm9qI1SpvSObQw/AXDGVGFivQfrU0DZ8/RBHuqCifXhX9UL0p5m73oZTokdTCBG7y/Vyvh5525M5Im/cg3TIpvdtwVgu6FZH7IcHRb1loC2KGtJ2kJj7V9hioGasHuYK0rl5LbQ+elIH6gYRl85uqrqIJsVRRre4CIRXp7p76pquwCKeb0LGO7JxiXzh3A1wfi2pOd4NkNH4R96nPeWdL741utyMuf5BujPqnTkcSX7P7kxO4mogWS60b1bYGJpOATWA6WHExIbgF/UbYi7XxiU=) 2026-02-02 00:23:06.181152 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNlHzJNu2o94puwRm62N/I13SkE9br6XZZmzUGjvfJzy87MUQWp1CY4YGsiF2ubmrFPbn299g2ZdY5i4/WbHQkE=) 2026-02-02 00:23:06.181164 | orchestrator | 2026-02-02 00:23:06.181177 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-02 00:23:06.181191 | orchestrator | Monday 02 February 2026 00:22:56 +0000 
(0:00:01.068) 0:00:14.001 ******* 2026-02-02 00:23:06.181202 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-02 00:23:06.181237 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-02 00:23:06.181248 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-02 00:23:06.181286 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-02 00:23:06.181298 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-02 00:23:06.181308 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-02 00:23:06.181319 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-02 00:23:06.181331 | orchestrator | 2026-02-02 00:23:06.181345 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-02 00:23:06.181360 | orchestrator | Monday 02 February 2026 00:23:01 +0000 (0:00:05.441) 0:00:19.442 ******* 2026-02-02 00:23:06.181374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-02 00:23:06.181389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-02 00:23:06.181401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-02 00:23:06.181413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-02 00:23:06.181424 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-02 00:23:06.181436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-02 00:23:06.181447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-02 00:23:06.181458 | orchestrator | 2026-02-02 00:23:06.181470 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:23:06.181481 | orchestrator | Monday 02 February 2026 00:23:01 +0000 (0:00:00.198) 0:00:19.641 ******* 2026-02-02 00:23:06.181493 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL55BwRChSjlToB+DEtZjOxUIOujMsJUpPAeKfUbUTrQ) 2026-02-02 00:23:06.181547 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBnlnL/NGfr5bnSq79mfOa0EonYwZyVolaRdN5eJ+Be5QfMKbPJNENnamcFKPuLVpLSx3jcQMM15TKKM7nUgolJ6+EVm5OEtnnHq9pdbVA/KkBQr+Akz/59qccHmPB6SlOlleH2NPhJloGbHB4AsqnOmA0+to/bk+hXi/OMkz4796GBvj+aysLIJ3fTI87ne6IhWbIjtrydS2t8EdXldbPbuJTJoX/l3XjP+2ZZYSIyytgB6nO2lJkU2f1LjwNk3uV8N/80u74TgYw6uMvP1KVwepVq4UDTgkIRDLvqFVGmdAs1XcwNmkzOsqZugA4m3EDzO4iGMq6T7TdWANuvf8X/fegC41AzGyNO5BsCBHRlnlTPMzdp1uPpuoJvOhI0ZAk89bZUsUyHAZW2SEG6QaTb/zcry9UbzxNLZAqJW0nSJFcGUV6tdLNj2tKRbAvgPUsUWnjVpZj2OfehNLsTCTXUg4S6SbGiqR6UoWh+i7V8JcTRWpoqXc4WxfnnTHndWs=) 2026-02-02 00:23:06.181561 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP4HA04odZzme52S/oy83qQVNJYHZtSKj/QgBfUwMI2GBsC2Xmpm6Js77cMwr2h3qrOFYd2fq07Pp2fXVWWTXAU=) 2026-02-02 
00:23:06.181573 | orchestrator | 2026-02-02 00:23:06.181585 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:23:06.181628 | orchestrator | Monday 02 February 2026 00:23:02 +0000 (0:00:01.130) 0:00:20.772 ******* 2026-02-02 00:23:06.181639 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDQuXln3f6EDS/5/lj8+w1t2j8bRlrXZ+2ww94t93/O4gQ0jXPM7dYaOUHdFF0b31nKp2rQSgmGfVM1wsR/iqvc=) 2026-02-02 00:23:06.181650 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2n6zhLVk/UALqd1ffwof6UchjVda4qD3frRAi+VPz1r/AL2ZAIF30U4IMRcFhRS5OFHPMcPHnGr6QaOK4xsPC69hZCJ8MYNYOtoV0mcjLhJpPcCNlIS28agEQACRvcHEQlD6kJ/EP3Sql19q1ApdqkeV+RTURDtsmDWWUy12wKTlKRCY7SdRWI6XqEwsN0D7hQ6PI/CSTurBOowxglOasRpoBKSE4UBG6hlBeZSmY9flgYW3yB0d3wQNMFtQoPrLCxdCTIDHE7HtBooJP5seGcHBdFMB7jpcnkL+wLXnxPZ1KU0CJUIgHP6FFjzxU5MErGmic13DhJD231eg+L3sXu86n3NrIEC6CKxSxJ/OUhGrhnMc5SCWpaki4TmUZz15q/IeIXK+UlzEkFEicl+Qqh8qwfcauC0/SV8pzVthF+Pu4sp5QJBB8/S4ZewLnmPK23UBLIkagq4vlBr3tV3srnlsXEJewA8eai87AGbRRufgOiLPYP6Q4Icu0G/ZggDc=) 2026-02-02 00:23:06.181675 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGT3yH2ROyp/pOKlizqIOJVDkrbeC/MWGG22KZtXLEXv) 2026-02-02 00:23:06.181686 | orchestrator | 2026-02-02 00:23:06.181698 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:23:06.181710 | orchestrator | Monday 02 February 2026 00:23:03 +0000 (0:00:01.123) 0:00:21.895 ******* 2026-02-02 00:23:06.181721 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN1n+HLaNqdqy/35NMBd/k84TXXv0b1S6ESsrqRkestl0GZ1T9XxGRnSj2Q7fyojIZWCs2wsg5xuXj0b+BGZ/e4=) 2026-02-02 00:23:06.181733 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDMU4kPm3rtsj5UdW1io6uLmI9V3h9EGwphPc+I2OIqVvTPP9i9YjaPL6WTaIhO+3hq+LpERPilNTYUW0cFm4+I4/2a9FfTk+ocWYrDh3usqFTc2q7LTQ2U43G3NwwrSJf/OgKq4DMbuS7OL3/YeYWnp2l2PYPv4iYBOVJYhtgwCVWRC+dqRoKrvMyTj8pdXJ8gbrXBzw96VN1NB4oconcq57qR6vpw8NgwT2oORCemzRUWr/ejwgnj2Q+0mhelxYAncBYR1odynYl6WWQAonpr0F6Fy4zbLRdofTE1rY9om7Q+st3TGHqyoCmAuUljJUQXHHU7CTG9UPsJKKe48OZnP2NHV8PzoQZIwd2i9krXw3wtzd5ZGi/XnLM72OKFX/C6H4vPfSXJ7bnWWDnDwnx6DTO8OG2wHeO1mdIvGXuglHaPF8CyAu95+DzIEyF1l7IsTm8PT8iuSVU7urOa1FXWyfNuJI2Cm1Ge4fQ+3pgjVRWi+1X/ZpOaVgaRUQUK50U=) 2026-02-02 00:23:06.181745 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOL/qODFeU7cBCQWNnVZPIz50ZLIu60pwHPISonstgqw) 2026-02-02 00:23:06.181756 | orchestrator | 2026-02-02 00:23:06.181767 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:23:06.181777 | orchestrator | Monday 02 February 2026 00:23:05 +0000 (0:00:01.118) 0:00:23.014 ******* 2026-02-02 00:23:06.181787 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDKWfBPR/yySvwNUDGRtuQThX45y3jg+wzm7SkcWJnQyqDR0kLAv+rJV3v40CPaK6tMj22lvGjyhI33N42X22Zk=) 2026-02-02 00:23:06.181806 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDo3SPbZcr8o0wd/cnbJxtSseZphWG6ty35zTHYu6w9QjK6jLj1DrDMJFPRvGVCGOKJP9F3jH5aGDNoNmITa4MAD2ybtac9q0pQ3/sgzbYv4ndFkYjnpSwNQeYJ6LiVyE45xY9m+9mx3nCfckcUSmAb9HhPvcyC2dA+nT5fAtwTmiZH29WNpbFyst4B25hE7U+WwuzVpqH33SJzzSXLhtlk0qZ6yrjMJ8jqHfpI+6rnBqYcViG5wZJZXOEcAYpBYyRtbDbibUH3cAQ4ukPiqpxJEe2NPEXsNGZYf7G83Sm0Sc1geW7xQQkBv29Sf4W5zq0YJO+f5Oi002VZq7CoSb/JmU4F3p25P933CYhUy6BbPkOgkb4YLDlSv+9bpYLT35Gp7iHmSKH7EiatjXhyJi6hM+iEA2DgDj4Vo+1sYTdVdyEHLZxstwQ/tOFKIihqGdwRsysPhNUXHCg/dm9MJTlfFAjYf/bdbLnP2h8zkn7aDDmc6tvDz+Y/6V1rhQzZbQk=) 2026-02-02 00:23:06.181831 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH6HpR0GBteSdZG4JYkqeRe2q5pL4Mab3q33jlGUp+w3) 2026-02-02 00:23:10.816573 | orchestrator | 2026-02-02 00:23:10.816723 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:23:10.816731 | orchestrator | Monday 02 February 2026 00:23:06 +0000 (0:00:01.129) 0:00:24.143 ******* 2026-02-02 00:23:10.816739 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMFOOa9tcJkc5WW2h38FVGBGE2cWMUNtD3kuBqdVJcw45K6mMKCWc3QP+MxzbPbJwZ7ttXp3dd7jgmdsBd7poEJyvCyvwo8FKGFG/0JvY2yEL9e9wMlvRm7EswwxusUQ4XUR0t+ceISaH7MBlC41zh7BUu34orHVImvelctKeTMsUpBsqlBq1X4eMqyWq9lehMfSMZsFuqYMFrT8PcXKRuOa4RFxn9Gg3qikUc+F5jxn19nnpqEnGo46Nqu2cuDiNUZTloWQrmFV5zQDwsqiUByE0WEB5rI4tAfvbkFDZCIwKlhw21N17mzgzmjKaoL/uu6w5ABeWhnSuf6jKzbyUIYTuHgUwHuRrzmTTTRZHrGufEwc3Kovch8P+I/KPqDKx/Xq/1JDvLzrlqLcIXZ7KgdOkjFMkWYEUJRZaCY9Y+4TH7+yf7dht66aK9/K9bpmwawSUQWs8qooS5+Qi70w/aTj53RjDiNb0+adhK03IL5+gAetSFYHMA04Nk5Y0FINU=) 2026-02-02 00:23:10.816770 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMCPFI6caQK+bbJ5yLbjNDMNo+CPDThXMpqH705ANQThuBi+Gsn6XIZ5CyWl3sgXhq/cuAF91OpfnlA/nvu0hxA=) 2026-02-02 00:23:10.816778 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM6yUC/R552mJmXsBfLEMk0MSjMRAHGKIrMyHVHO6sjB) 2026-02-02 00:23:10.816784 | orchestrator | 2026-02-02 00:23:10.816803 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:23:10.816807 | orchestrator | Monday 02 February 2026 00:23:07 +0000 (0:00:01.103) 0:00:25.247 ******* 2026-02-02 00:23:10.816811 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZbYtiWK7ZewGmRy58xBk9Xew4e6mWxU/+RBTP2zo0UsD/hXw3qIB+PcndTeKoSIbv2Vf/kmqAWPZcHLzFnmkgRGDNNzGXuVJhIeLx7Or0CKsltZjmGCqmHt5H3JL/96T5bgfr49pVhKkg3yUsDp5YbO7LXJFs5DS/m6DhWmisr6evkPI3h39RsLsA9lBSEsjP92iBegcSBmkxIU/FbL1FtsE2YcKiyBxPbNcqsSnrqfx1EBeNa4oPazHg1HF9iJMhf5EDXThH2C+/xOskByXXB6iEDpj7wdvxFLHf6dN6J6PjbATuuw5tEirQOTZqdFVY08uSiICBQmDCuxMDT7nZhquH6SK9IJSLV7HYbi3hiL2E9KgvMVT/UDFe2PCs3/oPAXL0yum4kEoDf/nhchLheRgXLpfqnybwnZw7KpzzLvYFnI9yKONYl0PFVp5mHw9UGOuIwTOVwfKIx4ZppvEhZS0ZE+rbQ6DUMCIIjeqJ7PKAv75nWw31zFkBpzCofyk=) 2026-02-02 00:23:10.816815 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJjSqKARJNlgD4UDwp7YQFKoJLZi5IlmueM+FhswIxq) 2026-02-02 00:23:10.816819 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG0UyxYman5KPlDRpYQ79N7tohxkSu66XYPButeex9pUfi+6l6saO2hyH16OASpXuFFbuOAV3QCi8K6zLNQeLwg=) 2026-02-02 00:23:10.816823 | orchestrator | 2026-02-02 00:23:10.816827 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 00:23:10.816831 | orchestrator | Monday 02 February 2026 00:23:08 +0000 (0:00:01.076) 0:00:26.324 ******* 2026-02-02 00:23:10.816835 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINPbDDw+AmhcLyWNdnqm51asK1CfdAA2WuIfaJTSYTMD) 2026-02-02 00:23:10.816839 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDASdUEG1ocSQVwPDaXTNBUCKYs26m7nHAJz7c7s81Bbq+Pd4HTWzf4OhCsGsr33Gs1q3rFxXn9JxGhO+brqbJKi1PMX9f6RacozMEuj1FYx95Xk4WmaQbCl3DeVQfHPvrVfsTDBa3kI2v4X6CFhzdXDVhaBYTlTA3IHuxG/ehwfmuvTjb2HJgZPusj2aChOy6/I3YWLgGn4GVTG7bklwQadLxakF3Z8T7UdHyATWDitm9qI1SpvSObQw/AXDGVGFivQfrU0DZ8/RBHuqCifXhX9UL0p5m73oZTokdTCBG7y/Vyvh5525M5Im/cg3TIpvdtwVgu6FZH7IcHRb1loC2KGtJ2kJj7V9hioGasHuYK0rl5LbQ+elIH6gYRl85uqrqIJsVRRre4CIRXp7p76pquwCKeb0LGO7JxiXzh3A1wfi2pOd4NkNH4R96nPeWdL741utyMuf5BujPqnTkcSX7P7kxO4mogWS60b1bYGJpOATWA6WHExIbgF/UbYi7XxiU=) 2026-02-02 00:23:10.816843 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNlHzJNu2o94puwRm62N/I13SkE9br6XZZmzUGjvfJzy87MUQWp1CY4YGsiF2ubmrFPbn299g2ZdY5i4/WbHQkE=) 2026-02-02 00:23:10.816847 | orchestrator | 2026-02-02 00:23:10.816851 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-02 00:23:10.816855 | orchestrator | Monday 02 February 2026 00:23:09 +0000 (0:00:01.110) 0:00:27.434 ******* 2026-02-02 00:23:10.816859 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-02 00:23:10.816864 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-02 00:23:10.816868 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-02 00:23:10.816872 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-02 00:23:10.816876 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-02 00:23:10.816879 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-02 00:23:10.816883 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-02 00:23:10.816891 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:23:10.816896 | orchestrator | 2026-02-02 00:23:10.816914 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2026-02-02 00:23:10.816918 | orchestrator | Monday 02 February 2026 00:23:09 +0000 (0:00:00.158) 0:00:27.592 ******* 2026-02-02 00:23:10.816922 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:23:10.816926 | orchestrator | 2026-02-02 00:23:10.816929 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-02 00:23:10.816933 | orchestrator | Monday 02 February 2026 00:23:09 +0000 (0:00:00.066) 0:00:27.658 ******* 2026-02-02 00:23:10.816937 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:23:10.816941 | orchestrator | 2026-02-02 00:23:10.816944 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-02 00:23:10.816948 | orchestrator | Monday 02 February 2026 00:23:09 +0000 (0:00:00.054) 0:00:27.713 ******* 2026-02-02 00:23:10.816952 | orchestrator | changed: [testbed-manager] 2026-02-02 00:23:10.816956 | orchestrator | 2026-02-02 00:23:10.816960 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:23:10.816964 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-02 00:23:10.816970 | orchestrator | 2026-02-02 00:23:10.816973 | orchestrator | 2026-02-02 00:23:10.816977 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 00:23:10.816981 | orchestrator | Monday 02 February 2026 00:23:10 +0000 (0:00:00.791) 0:00:28.504 ******* 2026-02-02 00:23:10.816985 | orchestrator | =============================================================================== 2026-02-02 00:23:10.816989 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.92s 2026-02-02 00:23:10.816993 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.44s 2026-02-02 00:23:10.816997 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.26s 2026-02-02 00:23:10.817002 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-02 00:23:10.817006 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-02 00:23:10.817010 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-02 00:23:10.817013 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-02 00:23:10.817017 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-02 00:23:10.817021 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-02-02 00:23:10.817024 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-02-02 00:23:10.817028 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-02-02 00:23:10.817032 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-02-02 00:23:10.817036 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-02-02 00:23:10.817039 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-02 00:23:10.817048 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-02 00:23:10.817052 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-02 00:23:10.817056 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.79s 2026-02-02 00:23:10.817059 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2026-02-02 00:23:10.817064 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-02-02 00:23:10.817068 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-02-02 00:23:11.183476 | orchestrator | + osism apply squid 2026-02-02 00:23:23.233686 | orchestrator | 2026-02-02 00:23:23 | INFO  | Prepare task for execution of squid. 2026-02-02 00:23:23.309740 | orchestrator | 2026-02-02 00:23:23 | INFO  | Task 69144b98-2356-42d0-acfd-038f3d37d69c (squid) was prepared for execution. 2026-02-02 00:23:23.309834 | orchestrator | 2026-02-02 00:23:23 | INFO  | It takes a moment until task 69144b98-2356-42d0-acfd-038f3d37d69c (squid) has been started and output is visible here. 2026-02-02 00:25:20.263897 | orchestrator | 2026-02-02 00:25:20.264045 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-02 00:25:20.264073 | orchestrator | 2026-02-02 00:25:20.264092 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-02 00:25:20.264111 | orchestrator | Monday 02 February 2026 00:23:27 +0000 (0:00:00.164) 0:00:00.164 ******* 2026-02-02 00:25:20.264131 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-02 00:25:20.264150 | orchestrator | 2026-02-02 00:25:20.264169 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-02 00:25:20.264189 | orchestrator | Monday 02 February 2026 00:23:27 +0000 (0:00:00.090) 0:00:00.255 ******* 2026-02-02 00:25:20.264209 | orchestrator | ok: [testbed-manager] 2026-02-02 00:25:20.264229 | orchestrator | 2026-02-02 00:25:20.264242 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-02 00:25:20.264253 | orchestrator | Monday 02 February 2026 00:23:29 
+0000 (0:00:01.529) 0:00:01.784 ******* 2026-02-02 00:25:20.264264 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-02 00:25:20.264275 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-02 00:25:20.264286 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-02 00:25:20.264297 | orchestrator | 2026-02-02 00:25:20.264330 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-02 00:25:20.264341 | orchestrator | Monday 02 February 2026 00:23:30 +0000 (0:00:01.220) 0:00:03.004 ******* 2026-02-02 00:25:20.264352 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-02 00:25:20.264364 | orchestrator | 2026-02-02 00:25:20.264375 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-02 00:25:20.264386 | orchestrator | Monday 02 February 2026 00:23:31 +0000 (0:00:01.103) 0:00:04.108 ******* 2026-02-02 00:25:20.264397 | orchestrator | ok: [testbed-manager] 2026-02-02 00:25:20.264408 | orchestrator | 2026-02-02 00:25:20.264419 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-02 00:25:20.264430 | orchestrator | Monday 02 February 2026 00:23:31 +0000 (0:00:00.355) 0:00:04.464 ******* 2026-02-02 00:25:20.264441 | orchestrator | changed: [testbed-manager] 2026-02-02 00:25:20.264452 | orchestrator | 2026-02-02 00:25:20.264463 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-02 00:25:20.264532 | orchestrator | Monday 02 February 2026 00:23:32 +0000 (0:00:00.949) 0:00:05.413 ******* 2026-02-02 00:25:20.264548 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-02 00:25:20.264560 | orchestrator | ok: [testbed-manager] 2026-02-02 00:25:20.264571 | orchestrator | 2026-02-02 00:25:20.264582 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-02 00:25:20.264593 | orchestrator | Monday 02 February 2026 00:24:07 +0000 (0:00:34.528) 0:00:39.942 ******* 2026-02-02 00:25:20.264604 | orchestrator | changed: [testbed-manager] 2026-02-02 00:25:20.264615 | orchestrator | 2026-02-02 00:25:20.264643 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-02 00:25:20.264655 | orchestrator | Monday 02 February 2026 00:24:19 +0000 (0:00:11.945) 0:00:51.887 ******* 2026-02-02 00:25:20.264666 | orchestrator | Pausing for 60 seconds 2026-02-02 00:25:20.264677 | orchestrator | changed: [testbed-manager] 2026-02-02 00:25:20.264688 | orchestrator | 2026-02-02 00:25:20.264700 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-02 00:25:20.264738 | orchestrator | Monday 02 February 2026 00:25:19 +0000 (0:01:00.085) 0:01:51.972 ******* 2026-02-02 00:25:20.264749 | orchestrator | ok: [testbed-manager] 2026-02-02 00:25:20.264760 | orchestrator | 2026-02-02 00:25:20.264771 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-02 00:25:20.264782 | orchestrator | Monday 02 February 2026 00:25:19 +0000 (0:00:00.069) 0:01:52.042 ******* 2026-02-02 00:25:20.264793 | orchestrator | changed: [testbed-manager] 2026-02-02 00:25:20.264804 | orchestrator | 2026-02-02 00:25:20.264815 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:25:20.264827 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 00:25:20.264838 | orchestrator | 2026-02-02 00:25:20.264849 | orchestrator | 2026-02-02 00:25:20.264860 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-02 00:25:20.264870 | orchestrator | Monday 02 February 2026 00:25:19 +0000 (0:00:00.608) 0:01:52.650 ******* 2026-02-02 00:25:20.264881 | orchestrator | =============================================================================== 2026-02-02 00:25:20.264892 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-02-02 00:25:20.264903 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.53s 2026-02-02 00:25:20.264913 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.95s 2026-02-02 00:25:20.264924 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.53s 2026-02-02 00:25:20.264935 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s 2026-02-02 00:25:20.264945 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.10s 2026-02-02 00:25:20.264956 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2026-02-02 00:25:20.264966 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2026-02-02 00:25:20.264977 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s 2026-02-02 00:25:20.264988 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-02-02 00:25:20.264998 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-02-02 00:25:20.602193 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-02 00:25:20.602310 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-02-02 00:25:20.608544 | orchestrator | + set -e 2026-02-02 00:25:20.608612 | orchestrator | + NAMESPACE=kolla 2026-02-02 
00:25:20.608627 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-02 00:25:20.613888 | orchestrator | ++ semver latest 9.0.0 2026-02-02 00:25:20.665458 | orchestrator | + [[ -1 -lt 0 ]] 2026-02-02 00:25:20.665616 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-02 00:25:20.666288 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-02-02 00:25:32.793994 | orchestrator | 2026-02-02 00:25:32 | INFO  | Prepare task for execution of operator. 2026-02-02 00:25:32.859547 | orchestrator | 2026-02-02 00:25:32 | INFO  | Task 0548298b-627d-4a26-b708-9e8e1eb6ff79 (operator) was prepared for execution. 2026-02-02 00:25:32.859645 | orchestrator | 2026-02-02 00:25:32 | INFO  | It takes a moment until task 0548298b-627d-4a26-b708-9e8e1eb6ff79 (operator) has been started and output is visible here. 2026-02-02 00:25:49.182630 | orchestrator | 2026-02-02 00:25:49.182739 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-02-02 00:25:49.182757 | orchestrator | 2026-02-02 00:25:49.182770 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 00:25:49.182783 | orchestrator | Monday 02 February 2026 00:25:37 +0000 (0:00:00.151) 0:00:00.152 ******* 2026-02-02 00:25:49.182794 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:25:49.182807 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:25:49.182818 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:25:49.182859 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:25:49.182871 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:25:49.182882 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:25:49.182893 | orchestrator | 2026-02-02 00:25:49.182904 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-02-02 00:25:49.182915 | orchestrator | Monday 02 February 2026 00:25:40 
+0000 (0:00:03.370) 0:00:03.522 ******* 2026-02-02 00:25:49.182926 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:25:49.182937 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:25:49.182947 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:25:49.182958 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:25:49.182969 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:25:49.182979 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:25:49.182990 | orchestrator | 2026-02-02 00:25:49.183000 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-02 00:25:49.183011 | orchestrator | 2026-02-02 00:25:49.183022 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-02 00:25:49.183033 | orchestrator | Monday 02 February 2026 00:25:41 +0000 (0:00:00.798) 0:00:04.321 ******* 2026-02-02 00:25:49.183044 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:25:49.183054 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:25:49.183065 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:25:49.183076 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:25:49.183086 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:25:49.183097 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:25:49.183107 | orchestrator | 2026-02-02 00:25:49.183118 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-02 00:25:49.183129 | orchestrator | Monday 02 February 2026 00:25:41 +0000 (0:00:00.171) 0:00:04.493 ******* 2026-02-02 00:25:49.183140 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:25:49.183153 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:25:49.183166 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:25:49.183178 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:25:49.183190 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:25:49.183203 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:25:49.183215 | orchestrator | 
2026-02-02 00:25:49.183246 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-02 00:25:49.183259 | orchestrator | Monday 02 February 2026 00:25:41 +0000 (0:00:00.174) 0:00:04.667 ******* 2026-02-02 00:25:49.183272 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:25:49.183286 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:25:49.183298 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:25:49.183311 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:25:49.183324 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:25:49.183337 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:25:49.183349 | orchestrator | 2026-02-02 00:25:49.183362 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-02 00:25:49.183375 | orchestrator | Monday 02 February 2026 00:25:42 +0000 (0:00:00.619) 0:00:05.287 ******* 2026-02-02 00:25:49.183387 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:25:49.183399 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:25:49.183412 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:25:49.183426 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:25:49.183439 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:25:49.183481 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:25:49.183503 | orchestrator | 2026-02-02 00:25:49.183517 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-02 00:25:49.183527 | orchestrator | Monday 02 February 2026 00:25:43 +0000 (0:00:00.824) 0:00:06.111 ******* 2026-02-02 00:25:49.183538 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-02 00:25:49.183550 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-02 00:25:49.183561 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-02 00:25:49.183571 | orchestrator | changed: [testbed-node-4] => (item=adm) 
2026-02-02 00:25:49.183582 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-02-02 00:25:49.183603 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-02-02 00:25:49.183614 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-02-02 00:25:49.183625 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-02-02 00:25:49.183636 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-02-02 00:25:49.183647 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-02-02 00:25:49.183657 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-02-02 00:25:49.183668 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-02-02 00:25:49.183679 | orchestrator |
2026-02-02 00:25:49.183690 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-02-02 00:25:49.183701 | orchestrator | Monday 02 February 2026 00:25:44 +0000 (0:00:01.288) 0:00:07.400 *******
2026-02-02 00:25:49.183712 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:25:49.183723 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:25:49.183734 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:25:49.183744 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:25:49.183755 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:25:49.183766 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:25:49.183777 | orchestrator |
2026-02-02 00:25:49.183788 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-02-02 00:25:49.183799 | orchestrator | Monday 02 February 2026 00:25:45 +0000 (0:00:01.199) 0:00:08.599 *******
2026-02-02 00:25:49.183810 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-02-02 00:25:49.183847 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-02-02 00:25:49.183859 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-02-02 00:25:49.183870 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-02-02 00:25:49.183881 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-02-02 00:25:49.183911 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-02-02 00:25:49.183922 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-02-02 00:25:49.183933 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-02-02 00:25:49.183944 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-02-02 00:25:49.183955 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-02-02 00:25:49.183965 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-02-02 00:25:49.183976 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-02-02 00:25:49.183987 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-02-02 00:25:49.183998 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-02-02 00:25:49.184009 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-02-02 00:25:49.184019 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-02-02 00:25:49.184030 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-02-02 00:25:49.184041 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-02-02 00:25:49.184052 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-02-02 00:25:49.184062 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-02-02 00:25:49.184073 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-02-02 00:25:49.184084 | orchestrator |
2026-02-02 00:25:49.184095 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-02-02 00:25:49.184106 | orchestrator | Monday 02 February 2026 00:25:46 +0000 (0:00:01.390) 0:00:09.990 *******
2026-02-02 00:25:49.184117 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:25:49.184128 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:25:49.184145 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:25:49.184163 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:25:49.184174 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:25:49.184185 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:25:49.184195 | orchestrator |
2026-02-02 00:25:49.184206 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-02-02 00:25:49.184217 | orchestrator | Monday 02 February 2026 00:25:47 +0000 (0:00:00.144) 0:00:10.134 *******
2026-02-02 00:25:49.184228 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:25:49.184239 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:25:49.184250 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:25:49.184260 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:25:49.184271 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:25:49.184282 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:25:49.184292 | orchestrator |
2026-02-02 00:25:49.184304 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-02-02 00:25:49.184315 | orchestrator | Monday 02 February 2026 00:25:47 +0000 (0:00:00.231) 0:00:10.366 *******
2026-02-02 00:25:49.184325 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:25:49.184336 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:25:49.184347 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:25:49.184357 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:25:49.184368 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:25:49.184379 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:25:49.184389 | orchestrator |
2026-02-02 00:25:49.184400 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-02-02 00:25:49.184411 | orchestrator | Monday 02 February 2026 00:25:47 +0000 (0:00:00.608) 0:00:10.974 *******
2026-02-02 00:25:49.184422 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:25:49.184433 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:25:49.184444 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:25:49.184499 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:25:49.184510 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:25:49.184521 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:25:49.184532 | orchestrator |
2026-02-02 00:25:49.184543 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-02-02 00:25:49.184554 | orchestrator | Monday 02 February 2026 00:25:48 +0000 (0:00:00.184) 0:00:11.159 *******
2026-02-02 00:25:49.184565 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-02 00:25:49.184575 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:25:49.184586 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-02 00:25:49.184596 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:25:49.184607 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-02 00:25:49.184618 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:25:49.184628 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-02 00:25:49.184639 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:25:49.184650 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-02 00:25:49.184661 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:25:49.184671 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-02 00:25:49.184682 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:25:49.184693 | orchestrator |
2026-02-02 00:25:49.184704 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-02-02 00:25:49.184715 | orchestrator | Monday 02 February 2026 00:25:48 +0000 (0:00:00.728) 0:00:11.887 *******
2026-02-02 00:25:49.184725 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:25:49.184736 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:25:49.184747 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:25:49.184757 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:25:49.184768 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:25:49.184779 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:25:49.184790 | orchestrator |
2026-02-02 00:25:49.184801 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-02-02 00:25:49.184811 | orchestrator | Monday 02 February 2026 00:25:49 +0000 (0:00:00.160) 0:00:12.048 *******
2026-02-02 00:25:49.184829 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:25:49.184840 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:25:49.184851 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:25:49.184862 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:25:49.184879 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:25:50.678872 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:25:50.678960 | orchestrator |
2026-02-02 00:25:50.678972 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-02-02 00:25:50.678983 | orchestrator | Monday 02 February 2026 00:25:49 +0000 (0:00:00.174) 0:00:12.222 *******
2026-02-02 00:25:50.678992 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:25:50.679001 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:25:50.679010 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:25:50.679019 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:25:50.679027 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:25:50.679036 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:25:50.679044 | orchestrator |
2026-02-02 00:25:50.679053 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-02-02 00:25:50.679062 | orchestrator | Monday 02 February 2026 00:25:49 +0000 (0:00:00.173) 0:00:12.396 *******
2026-02-02 00:25:50.679071 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:25:50.679079 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:25:50.679088 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:25:50.679096 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:25:50.679105 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:25:50.679113 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:25:50.679122 | orchestrator |
2026-02-02 00:25:50.679130 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-02-02 00:25:50.679139 | orchestrator | Monday 02 February 2026 00:25:50 +0000 (0:00:00.701) 0:00:13.098 *******
2026-02-02 00:25:50.679148 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:25:50.679156 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:25:50.679165 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:25:50.679173 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:25:50.679182 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:25:50.679190 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:25:50.679199 | orchestrator |
2026-02-02 00:25:50.679208 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:25:50.679218 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-02 00:25:50.679229 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-02 00:25:50.679258 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-02 00:25:50.679267 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-02 00:25:50.679276 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-02 00:25:50.679284 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-02 00:25:50.679293 | orchestrator |
2026-02-02 00:25:50.679302 | orchestrator |
2026-02-02 00:25:50.679310 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:25:50.679319 | orchestrator | Monday 02 February 2026 00:25:50 +0000 (0:00:00.270) 0:00:13.368 *******
2026-02-02 00:25:50.679327 | orchestrator | ===============================================================================
2026-02-02 00:25:50.679356 | orchestrator | Gathering Facts --------------------------------------------------------- 3.37s
2026-02-02 00:25:50.679365 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.39s
2026-02-02 00:25:50.679374 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.29s
2026-02-02 00:25:50.679383 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.20s
2026-02-02 00:25:50.679391 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.82s
2026-02-02 00:25:50.679412 | orchestrator | Do not require tty for all users ---------------------------------------- 0.80s
2026-02-02 00:25:50.679421 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s
2026-02-02 00:25:50.679431 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.70s
2026-02-02 00:25:50.679442 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s
2026-02-02 00:25:50.679478 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s
2026-02-02 00:25:50.679493 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.27s
2026-02-02 00:25:50.679504 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.23s
2026-02-02 00:25:50.679514 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-02-02 00:25:50.679524 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2026-02-02 00:25:50.679534 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2026-02-02 00:25:50.679544 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2026-02-02 00:25:50.679554 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-02-02 00:25:50.679565 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2026-02-02 00:25:50.679575 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2026-02-02 00:25:51.007377 | orchestrator | + osism apply --environment custom facts
2026-02-02 00:25:53.042935 | orchestrator | 2026-02-02 00:25:53 | INFO  | Trying to run play facts in environment custom
2026-02-02 00:26:03.105165 | orchestrator | 2026-02-02 00:26:03 | INFO  | Prepare task for execution of facts.
2026-02-02 00:26:03.177926 | orchestrator | 2026-02-02 00:26:03 | INFO  | Task 0007eb0f-ecba-4099-8f0a-1f0b77633647 (facts) was prepared for execution.
2026-02-02 00:26:03.178080 | orchestrator | 2026-02-02 00:26:03 | INFO  | It takes a moment until task 0007eb0f-ecba-4099-8f0a-1f0b77633647 (facts) has been started and output is visible here.
2026-02-02 00:26:47.919018 | orchestrator |
2026-02-02 00:26:47.919132 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-02 00:26:47.919149 | orchestrator |
2026-02-02 00:26:47.919162 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-02 00:26:47.919174 | orchestrator | Monday 02 February 2026 00:26:07 +0000 (0:00:00.072) 0:00:00.072 *******
2026-02-02 00:26:47.919186 | orchestrator | ok: [testbed-manager]
2026-02-02 00:26:47.919198 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:26:47.919210 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:26:47.919221 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:26:47.919231 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:26:47.919242 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:26:47.919253 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:26:47.919264 | orchestrator |
2026-02-02 00:26:47.919275 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-02 00:26:47.919286 | orchestrator | Monday 02 February 2026 00:26:08 +0000 (0:00:01.456) 0:00:01.528 *******
2026-02-02 00:26:47.919297 | orchestrator | ok: [testbed-manager]
2026-02-02 00:26:47.919309 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:26:47.919320 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:26:47.919356 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:26:47.919382 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:26:47.919393 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:26:47.919465 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:26:47.919478 | orchestrator |
2026-02-02 00:26:47.919489 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-02 00:26:47.919500 | orchestrator |
2026-02-02 00:26:47.919511 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-02 00:26:47.919522 | orchestrator | Monday 02 February 2026 00:26:10 +0000 (0:00:01.221) 0:00:02.750 *******
2026-02-02 00:26:47.919533 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:26:47.919543 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:26:47.919554 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:26:47.919567 | orchestrator |
2026-02-02 00:26:47.919580 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-02 00:26:47.919594 | orchestrator | Monday 02 February 2026 00:26:10 +0000 (0:00:00.125) 0:00:02.875 *******
2026-02-02 00:26:47.919607 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:26:47.919619 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:26:47.919631 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:26:47.919643 | orchestrator |
2026-02-02 00:26:47.919655 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-02 00:26:47.919668 | orchestrator | Monday 02 February 2026 00:26:10 +0000 (0:00:00.200) 0:00:03.075 *******
2026-02-02 00:26:47.919680 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:26:47.919693 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:26:47.919706 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:26:47.919718 | orchestrator |
2026-02-02 00:26:47.919731 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-02 00:26:47.919744 | orchestrator | Monday 02 February 2026 00:26:10 +0000 (0:00:00.208) 0:00:03.284 *******
2026-02-02 00:26:47.919758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:26:47.919772 | orchestrator |
2026-02-02 00:26:47.919785 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-02 00:26:47.919797 | orchestrator | Monday 02 February 2026 00:26:10 +0000 (0:00:00.145) 0:00:03.429 *******
2026-02-02 00:26:47.919810 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:26:47.919823 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:26:47.919835 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:26:47.919848 | orchestrator |
2026-02-02 00:26:47.919862 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-02 00:26:47.919874 | orchestrator | Monday 02 February 2026 00:26:11 +0000 (0:00:00.475) 0:00:03.904 *******
2026-02-02 00:26:47.919887 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:26:47.919899 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:26:47.919911 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:26:47.919925 | orchestrator |
2026-02-02 00:26:47.919938 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-02 00:26:47.919949 | orchestrator | Monday 02 February 2026 00:26:11 +0000 (0:00:00.126) 0:00:04.031 *******
2026-02-02 00:26:47.919959 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:26:47.919970 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:26:47.919981 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:26:47.919991 | orchestrator |
2026-02-02 00:26:47.920002 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-02 00:26:47.920013 | orchestrator | Monday 02 February 2026 00:26:12 +0000 (0:00:01.083) 0:00:05.114 *******
2026-02-02 00:26:47.920023 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:26:47.920034 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:26:47.920045 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:26:47.920056 | orchestrator |
2026-02-02 00:26:47.920067 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-02 00:26:47.920078 | orchestrator | Monday 02 February 2026 00:26:12 +0000 (0:00:00.469) 0:00:05.584 *******
2026-02-02 00:26:47.920098 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:26:47.920109 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:26:47.920120 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:26:47.920131 | orchestrator |
2026-02-02 00:26:47.920142 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-02 00:26:47.920152 | orchestrator | Monday 02 February 2026 00:26:13 +0000 (0:00:01.074) 0:00:06.659 *******
2026-02-02 00:26:47.920163 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:26:47.920174 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:26:47.920184 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:26:47.920195 | orchestrator |
2026-02-02 00:26:47.920206 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-02 00:26:47.920216 | orchestrator | Monday 02 February 2026 00:26:30 +0000 (0:00:16.605) 0:00:23.265 *******
2026-02-02 00:26:47.920227 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:26:47.920238 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:26:47.920248 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:26:47.920259 | orchestrator |
2026-02-02 00:26:47.920270 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-02 00:26:47.920298 | orchestrator | Monday 02 February 2026 00:26:30 +0000 (0:00:00.093) 0:00:23.358 *******
2026-02-02 00:26:47.920310 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:26:47.920321 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:26:47.920332 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:26:47.920342 | orchestrator |
2026-02-02 00:26:47.920354 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-02 00:26:47.920365 | orchestrator | Monday 02 February 2026 00:26:38 +0000 (0:00:08.040) 0:00:31.399 *******
2026-02-02 00:26:47.920375 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:26:47.920386 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:26:47.920397 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:26:47.920436 | orchestrator |
2026-02-02 00:26:47.920447 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-02 00:26:47.920458 | orchestrator | Monday 02 February 2026 00:26:39 +0000 (0:00:00.477) 0:00:31.876 *******
2026-02-02 00:26:47.920469 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-02 00:26:47.920480 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-02 00:26:47.920491 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-02 00:26:47.920502 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-02 00:26:47.920512 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-02 00:26:47.920523 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-02 00:26:47.920534 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-02 00:26:47.920544 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-02 00:26:47.920555 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-02 00:26:47.920566 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-02 00:26:47.920576 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-02 00:26:47.920587 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-02 00:26:47.920598 | orchestrator |
2026-02-02 00:26:47.920609 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-02 00:26:47.920619 | orchestrator | Monday 02 February 2026 00:26:42 +0000 (0:00:03.563) 0:00:35.440 *******
2026-02-02 00:26:47.920630 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:26:47.920641 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:26:47.920652 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:26:47.920663 | orchestrator |
2026-02-02 00:26:47.920673 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-02 00:26:47.920697 | orchestrator |
2026-02-02 00:26:47.920716 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-02 00:26:47.920736 | orchestrator | Monday 02 February 2026 00:26:44 +0000 (0:00:01.321) 0:00:36.761 *******
2026-02-02 00:26:47.920755 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:26:47.920774 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:26:47.920786 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:26:47.920797 | orchestrator | ok: [testbed-manager]
2026-02-02 00:26:47.920808 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:26:47.920818 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:26:47.920829 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:26:47.920839 | orchestrator |
2026-02-02 00:26:47.920850 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:26:47.920902 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:26:47.920915 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:26:47.920927 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:26:47.920938 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:26:47.920949 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:26:47.920960 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:26:47.920971 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:26:47.920981 | orchestrator |
2026-02-02 00:26:47.920992 | orchestrator |
2026-02-02 00:26:47.921003 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:26:47.921013 | orchestrator | Monday 02 February 2026 00:26:47 +0000 (0:00:03.888) 0:00:40.650 *******
2026-02-02 00:26:47.921040 | orchestrator | ===============================================================================
2026-02-02 00:26:47.921062 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.61s
2026-02-02 00:26:47.921073 | orchestrator | Install required packages (Debian) -------------------------------------- 8.04s
2026-02-02 00:26:47.921083 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.89s
2026-02-02 00:26:47.921094 | orchestrator | Copy fact files --------------------------------------------------------- 3.56s
2026-02-02 00:26:47.921105 | orchestrator | Create custom facts directory ------------------------------------------- 1.46s
2026-02-02 00:26:47.921116 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.32s
2026-02-02 00:26:47.921135 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2026-02-02 00:26:48.155162 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.08s
2026-02-02 00:26:48.155254 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2026-02-02 00:26:48.155267 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2026-02-02 00:26:48.155277 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.48s
2026-02-02 00:26:48.155286 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2026-02-02 00:26:48.155295 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2026-02-02 00:26:48.155303 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2026-02-02 00:26:48.155312 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-02-02 00:26:48.155345 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-02-02 00:26:48.155367 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s
2026-02-02 00:26:48.155376 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-02-02 00:26:48.491735 | orchestrator | + osism apply bootstrap
2026-02-02 00:27:00.593483 | orchestrator | 2026-02-02 00:27:00 | INFO  | Prepare task for execution of bootstrap.
2026-02-02 00:27:00.667311 | orchestrator | 2026-02-02 00:27:00 | INFO  | Task ff93a1a1-7d46-46c3-8d0a-37181c5cc5d3 (bootstrap) was prepared for execution.
2026-02-02 00:27:00.667446 | orchestrator | 2026-02-02 00:27:00 | INFO  | It takes a moment until task ff93a1a1-7d46-46c3-8d0a-37181c5cc5d3 (bootstrap) has been started and output is visible here.
2026-02-02 00:27:18.299593 | orchestrator |
2026-02-02 00:27:18.299727 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-02 00:27:18.299755 | orchestrator |
2026-02-02 00:27:18.299774 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-02 00:27:18.299794 | orchestrator | Monday 02 February 2026 00:27:05 +0000 (0:00:00.142) 0:00:00.142 *******
2026-02-02 00:27:18.299814 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:27:18.299835 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:27:18.299854 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:27:18.299872 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:27:18.299890 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:27:18.299908 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:27:18.299926 | orchestrator | ok: [testbed-manager]
2026-02-02 00:27:18.299945 | orchestrator |
2026-02-02 00:27:18.299963 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-02 00:27:18.299981 | orchestrator |
2026-02-02 00:27:18.300019 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-02 00:27:18.300039 | orchestrator | Monday 02 February 2026 00:27:05 +0000 (0:00:00.280) 0:00:00.422 *******
2026-02-02 00:27:18.300057 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:27:18.300075 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:27:18.300093 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:27:18.300112 | orchestrator | ok: [testbed-manager]
2026-02-02 00:27:18.300132 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:27:18.300150 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:27:18.300169 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:27:18.300181 | orchestrator |
2026-02-02 00:27:18.300192 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-02 00:27:18.300203 | orchestrator |
2026-02-02 00:27:18.300214 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-02 00:27:18.300225 | orchestrator | Monday 02 February 2026 00:27:09 +0000 (0:00:03.625) 0:00:04.048 *******
2026-02-02 00:27:18.300237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 00:27:18.300248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 00:27:18.300259 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-02 00:27:18.300270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 00:27:18.300281 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-02 00:27:18.300291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-02 00:27:18.300302 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-02 00:27:18.300312 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-02 00:27:18.300323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-02 00:27:18.300334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-02 00:27:18.300345 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-02 00:27:18.300356 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 00:27:18.300367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-02 00:27:18.300437 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-02 00:27:18.300450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-02 00:27:18.300461 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:27:18.300472 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 00:27:18.300483 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 00:27:18.300494 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-02 00:27:18.300504 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-02 00:27:18.300515 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-02 00:27:18.300526 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-02 00:27:18.300536 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 00:27:18.300547 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-02 00:27:18.300558 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-02 00:27:18.300568 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-02 00:27:18.300579 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-02 00:27:18.300590 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-02 00:27:18.300600 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:27:18.300611 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-02 00:27:18.300622 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-02 00:27:18.300632 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-02 00:27:18.300643 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 00:27:18.300654 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-02 00:27:18.300664 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-02
00:27:18.300675 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-02 00:27:18.300685 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-02 00:27:18.300697 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-02-02 00:27:18.300708 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:27:18.300719 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 00:27:18.300730 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-02 00:27:18.300740 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-02 00:27:18.300751 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-02 00:27:18.300762 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-02 00:27:18.300772 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-02 00:27:18.300783 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-02-02 00:27:18.300794 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:27:18.300825 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-02 00:27:18.300837 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-02-02 00:27:18.300847 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-02 00:27:18.300858 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:27:18.300869 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-02-02 00:27:18.300880 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:27:18.300891 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-02 00:27:18.300901 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-02 00:27:18.300912 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:27:18.300923 | orchestrator | 2026-02-02 00:27:18.300934 | orchestrator | 
PLAY [Apply bootstrap roles part 1] ******************************************** 2026-02-02 00:27:18.300944 | orchestrator | 2026-02-02 00:27:18.300955 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-02-02 00:27:18.300974 | orchestrator | Monday 02 February 2026 00:27:09 +0000 (0:00:00.450) 0:00:04.499 ******* 2026-02-02 00:27:18.300985 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:27:18.300995 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:27:18.301006 | orchestrator | ok: [testbed-manager] 2026-02-02 00:27:18.301017 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:27:18.301028 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:27:18.301038 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:27:18.301049 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:27:18.301059 | orchestrator | 2026-02-02 00:27:18.301070 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-02-02 00:27:18.301081 | orchestrator | Monday 02 February 2026 00:27:11 +0000 (0:00:02.255) 0:00:06.755 ******* 2026-02-02 00:27:18.301092 | orchestrator | ok: [testbed-manager] 2026-02-02 00:27:18.301103 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:27:18.301113 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:27:18.301130 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:27:18.301147 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:27:18.301163 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:27:18.301181 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:27:18.301200 | orchestrator | 2026-02-02 00:27:18.301217 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-02-02 00:27:18.301232 | orchestrator | Monday 02 February 2026 00:27:12 +0000 (0:00:01.279) 0:00:08.034 ******* 2026-02-02 00:27:18.301245 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-02 00:27:18.301258 | orchestrator | 2026-02-02 00:27:18.301270 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-02-02 00:27:18.301280 | orchestrator | Monday 02 February 2026 00:27:13 +0000 (0:00:00.331) 0:00:08.366 ******* 2026-02-02 00:27:18.301291 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:27:18.301302 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:27:18.301313 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:27:18.301324 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:27:18.301335 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:27:18.301345 | orchestrator | changed: [testbed-manager] 2026-02-02 00:27:18.301356 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:27:18.301366 | orchestrator | 2026-02-02 00:27:18.301403 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-02-02 00:27:18.301416 | orchestrator | Monday 02 February 2026 00:27:15 +0000 (0:00:02.131) 0:00:10.498 ******* 2026-02-02 00:27:18.301427 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:27:18.301439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:27:18.301452 | orchestrator | 2026-02-02 00:27:18.301463 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-02-02 00:27:18.301474 | orchestrator | Monday 02 February 2026 00:27:15 +0000 (0:00:00.262) 0:00:10.760 ******* 2026-02-02 00:27:18.301485 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:27:18.301496 | 
orchestrator | changed: [testbed-node-0] 2026-02-02 00:27:18.301506 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:27:18.301517 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:27:18.301527 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:27:18.301538 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:27:18.301549 | orchestrator | 2026-02-02 00:27:18.301560 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-02-02 00:27:18.301570 | orchestrator | Monday 02 February 2026 00:27:16 +0000 (0:00:01.077) 0:00:11.838 ******* 2026-02-02 00:27:18.301581 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:27:18.301592 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:27:18.301611 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:27:18.301639 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:27:18.301651 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:27:18.301662 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:27:18.301672 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:27:18.301683 | orchestrator | 2026-02-02 00:27:18.301694 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-02-02 00:27:18.301710 | orchestrator | Monday 02 February 2026 00:27:17 +0000 (0:00:00.713) 0:00:12.552 ******* 2026-02-02 00:27:18.301720 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:27:18.301731 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:27:18.301742 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:27:18.301753 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:27:18.301763 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:27:18.301774 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:27:18.301785 | orchestrator | ok: [testbed-manager] 2026-02-02 00:27:18.301796 | orchestrator | 2026-02-02 00:27:18.301807 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-02-02 00:27:18.301818 | orchestrator | Monday 02 February 2026 00:27:18 +0000 (0:00:00.652) 0:00:13.204 ******* 2026-02-02 00:27:18.301829 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:27:18.301840 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:27:18.301860 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:27:31.126482 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:27:31.126598 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:27:31.126615 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:27:31.126627 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:27:31.126639 | orchestrator | 2026-02-02 00:27:31.126651 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-02 00:27:31.126664 | orchestrator | Monday 02 February 2026 00:27:18 +0000 (0:00:00.254) 0:00:13.459 ******* 2026-02-02 00:27:31.126677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-02 00:27:31.126705 | orchestrator | 2026-02-02 00:27:31.126717 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-02 00:27:31.126729 | orchestrator | Monday 02 February 2026 00:27:18 +0000 (0:00:00.316) 0:00:13.776 ******* 2026-02-02 00:27:31.126741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-02 00:27:31.126752 | orchestrator | 2026-02-02 00:27:31.126763 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-02 
00:27:31.126774 | orchestrator | Monday 02 February 2026 00:27:19 +0000 (0:00:00.353) 0:00:14.129 ******* 2026-02-02 00:27:31.126785 | orchestrator | ok: [testbed-manager] 2026-02-02 00:27:31.126796 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:27:31.126807 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:27:31.126818 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:27:31.126829 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:27:31.126839 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:27:31.126850 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:27:31.126861 | orchestrator | 2026-02-02 00:27:31.126873 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-02 00:27:31.126883 | orchestrator | Monday 02 February 2026 00:27:20 +0000 (0:00:01.850) 0:00:15.980 ******* 2026-02-02 00:27:31.126894 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:27:31.126905 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:27:31.126916 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:27:31.126927 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:27:31.126946 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:27:31.126998 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:27:31.127021 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:27:31.127040 | orchestrator | 2026-02-02 00:27:31.127060 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-02 00:27:31.127080 | orchestrator | Monday 02 February 2026 00:27:21 +0000 (0:00:00.278) 0:00:16.258 ******* 2026-02-02 00:27:31.127102 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:27:31.127121 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:27:31.127137 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:27:31.127150 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:27:31.127164 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:27:31.127176 | orchestrator 
| ok: [testbed-manager] 2026-02-02 00:27:31.127189 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:27:31.127201 | orchestrator | 2026-02-02 00:27:31.127215 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-02 00:27:31.127227 | orchestrator | Monday 02 February 2026 00:27:21 +0000 (0:00:00.572) 0:00:16.831 ******* 2026-02-02 00:27:31.127240 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:27:31.127253 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:27:31.127266 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:27:31.127279 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:27:31.127291 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:27:31.127304 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:27:31.127316 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:27:31.127329 | orchestrator | 2026-02-02 00:27:31.127340 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-02 00:27:31.127352 | orchestrator | Monday 02 February 2026 00:27:22 +0000 (0:00:00.250) 0:00:17.081 ******* 2026-02-02 00:27:31.127388 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:27:31.127402 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:27:31.127412 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:27:31.127423 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:27:31.127434 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:27:31.127444 | orchestrator | ok: [testbed-manager] 2026-02-02 00:27:31.127455 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:27:31.127466 | orchestrator | 2026-02-02 00:27:31.127477 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-02 00:27:31.127488 | orchestrator | Monday 02 February 2026 00:27:22 +0000 (0:00:00.576) 0:00:17.657 ******* 2026-02-02 00:27:31.127499 | orchestrator | ok: 
[testbed-manager] 2026-02-02 00:27:31.127509 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:27:31.127520 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:27:31.127531 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:27:31.127541 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:27:31.127552 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:27:31.127562 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:27:31.127573 | orchestrator | 2026-02-02 00:27:31.127594 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-02 00:27:31.127606 | orchestrator | Monday 02 February 2026 00:27:23 +0000 (0:00:01.104) 0:00:18.762 ******* 2026-02-02 00:27:31.127617 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:27:31.127627 | orchestrator | ok: [testbed-manager] 2026-02-02 00:27:31.127638 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:27:31.127649 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:27:31.127659 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:27:31.127670 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:27:31.127681 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:27:31.127691 | orchestrator | 2026-02-02 00:27:31.127702 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-02 00:27:31.127714 | orchestrator | Monday 02 February 2026 00:27:24 +0000 (0:00:01.225) 0:00:19.988 ******* 2026-02-02 00:27:31.127744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-02 00:27:31.127771 | orchestrator | 2026-02-02 00:27:31.127790 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-02 00:27:31.127808 | orchestrator | Monday 02 February 2026 
00:27:25 +0000 (0:00:00.305) 0:00:20.293 ******* 2026-02-02 00:27:31.127826 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:27:31.127844 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:27:31.127857 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:27:31.127867 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:27:31.127878 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:27:31.127889 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:27:31.127900 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:27:31.127910 | orchestrator | 2026-02-02 00:27:31.127921 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-02 00:27:31.127934 | orchestrator | Monday 02 February 2026 00:27:26 +0000 (0:00:01.384) 0:00:21.678 ******* 2026-02-02 00:27:31.127953 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:27:31.127972 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:27:31.127989 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:27:31.128008 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:27:31.128027 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:27:31.128045 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:27:31.128063 | orchestrator | ok: [testbed-manager] 2026-02-02 00:27:31.128074 | orchestrator | 2026-02-02 00:27:31.128086 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-02 00:27:31.128097 | orchestrator | Monday 02 February 2026 00:27:26 +0000 (0:00:00.221) 0:00:21.899 ******* 2026-02-02 00:27:31.128108 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:27:31.128118 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:27:31.128129 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:27:31.128139 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:27:31.128150 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:27:31.128160 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:27:31.128171 | 
orchestrator | ok: [testbed-manager] 2026-02-02 00:27:31.128182 | orchestrator | 2026-02-02 00:27:31.128193 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-02 00:27:31.128204 | orchestrator | Monday 02 February 2026 00:27:27 +0000 (0:00:00.234) 0:00:22.134 ******* 2026-02-02 00:27:31.128214 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:27:31.128225 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:27:31.128235 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:27:31.128246 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:27:31.128256 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:27:31.128267 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:27:31.128278 | orchestrator | ok: [testbed-manager] 2026-02-02 00:27:31.128288 | orchestrator | 2026-02-02 00:27:31.128299 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-02 00:27:31.128310 | orchestrator | Monday 02 February 2026 00:27:27 +0000 (0:00:00.261) 0:00:22.395 ******* 2026-02-02 00:27:31.128322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-02 00:27:31.128334 | orchestrator | 2026-02-02 00:27:31.128345 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-02 00:27:31.128355 | orchestrator | Monday 02 February 2026 00:27:27 +0000 (0:00:00.303) 0:00:22.699 ******* 2026-02-02 00:27:31.128410 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:27:31.128424 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:27:31.128435 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:27:31.128446 | orchestrator | ok: [testbed-manager] 2026-02-02 00:27:31.128457 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:27:31.128467 | orchestrator | ok: 
[testbed-node-2] 2026-02-02 00:27:31.128477 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:27:31.128488 | orchestrator | 2026-02-02 00:27:31.128508 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-02 00:27:31.128519 | orchestrator | Monday 02 February 2026 00:27:28 +0000 (0:00:00.548) 0:00:23.247 ******* 2026-02-02 00:27:31.128530 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:27:31.128541 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:27:31.128551 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:27:31.128562 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:27:31.128573 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:27:31.128583 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:27:31.128594 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:27:31.128605 | orchestrator | 2026-02-02 00:27:31.128616 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-02 00:27:31.128626 | orchestrator | Monday 02 February 2026 00:27:28 +0000 (0:00:00.229) 0:00:23.477 ******* 2026-02-02 00:27:31.128637 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:27:31.128648 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:27:31.128658 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:27:31.128669 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:27:31.128680 | orchestrator | ok: [testbed-manager] 2026-02-02 00:27:31.128690 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:27:31.128701 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:27:31.128712 | orchestrator | 2026-02-02 00:27:31.128723 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-02 00:27:31.128734 | orchestrator | Monday 02 February 2026 00:27:29 +0000 (0:00:01.110) 0:00:24.588 ******* 2026-02-02 00:27:31.128744 | orchestrator | ok: [testbed-node-3] 2026-02-02 
00:27:31.128755 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:27:31.128766 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:27:31.128776 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:27:31.128787 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:27:31.128797 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:27:31.128808 | orchestrator | ok: [testbed-manager] 2026-02-02 00:27:31.128818 | orchestrator | 2026-02-02 00:27:31.128829 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-02 00:27:31.128840 | orchestrator | Monday 02 February 2026 00:27:30 +0000 (0:00:00.549) 0:00:25.137 ******* 2026-02-02 00:27:31.128851 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:27:31.128861 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:27:31.128872 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:27:31.128882 | orchestrator | ok: [testbed-manager] 2026-02-02 00:27:31.128903 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:28:13.372368 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:28:13.372503 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:28:13.372520 | orchestrator | 2026-02-02 00:28:13.372532 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-02 00:28:13.372544 | orchestrator | Monday 02 February 2026 00:27:31 +0000 (0:00:01.136) 0:00:26.273 ******* 2026-02-02 00:28:13.372554 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:28:13.372565 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:28:13.372575 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:28:13.372584 | orchestrator | changed: [testbed-manager] 2026-02-02 00:28:13.372594 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:28:13.372604 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:28:13.372614 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:28:13.372625 | orchestrator | 2026-02-02 00:28:13.372635 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-02-02 00:28:13.372645 | orchestrator | Monday 02 February 2026 00:27:48 +0000 (0:00:16.851) 0:00:43.125 ******* 2026-02-02 00:28:13.372654 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:28:13.372664 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:28:13.372674 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:28:13.372684 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:28:13.372693 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:28:13.372703 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:28:13.372712 | orchestrator | ok: [testbed-manager] 2026-02-02 00:28:13.372744 | orchestrator | 2026-02-02 00:28:13.372754 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-02-02 00:28:13.372764 | orchestrator | Monday 02 February 2026 00:27:48 +0000 (0:00:00.262) 0:00:43.387 ******* 2026-02-02 00:28:13.372774 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:28:13.372783 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:28:13.372793 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:28:13.372825 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:28:13.372835 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:28:13.372845 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:28:13.372856 | orchestrator | ok: [testbed-manager] 2026-02-02 00:28:13.372867 | orchestrator | 2026-02-02 00:28:13.372879 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-02-02 00:28:13.372912 | orchestrator | Monday 02 February 2026 00:27:48 +0000 (0:00:00.252) 0:00:43.640 ******* 2026-02-02 00:28:13.372929 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:28:13.372945 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:28:13.372961 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:28:13.372992 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:28:13.373009 | orchestrator | ok: 
[testbed-node-1]
2026-02-02 00:28:13.373026 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:28:13.373044 | orchestrator | ok: [testbed-manager]
2026-02-02 00:28:13.373061 | orchestrator |
2026-02-02 00:28:13.373079 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-02 00:28:13.373094 | orchestrator | Monday 02 February 2026 00:27:48 +0000 (0:00:00.231) 0:00:43.871 *******
2026-02-02 00:28:13.373109 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:28:13.373122 | orchestrator |
2026-02-02 00:28:13.373132 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-02-02 00:28:13.373142 | orchestrator | Monday 02 February 2026 00:27:49 +0000 (0:00:00.290) 0:00:44.161 *******
2026-02-02 00:28:13.373151 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:28:13.373161 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:28:13.373171 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:28:13.373180 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:28:13.373189 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:28:13.373199 | orchestrator | ok: [testbed-manager]
2026-02-02 00:28:13.373208 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:28:13.373218 | orchestrator |
2026-02-02 00:28:13.373228 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-02-02 00:28:13.373237 | orchestrator | Monday 02 February 2026 00:27:50 +0000 (0:00:01.721) 0:00:45.883 *******
2026-02-02 00:28:13.373247 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:28:13.373257 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:28:13.373267 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:28:13.373295 | orchestrator | changed: [testbed-manager]
2026-02-02 00:28:13.373306 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:28:13.373315 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:28:13.373325 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:28:13.373386 | orchestrator |
2026-02-02 00:28:13.373397 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-02-02 00:28:13.373407 | orchestrator | Monday 02 February 2026 00:27:51 +0000 (0:00:01.013) 0:00:46.897 *******
2026-02-02 00:28:13.373416 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:28:13.373426 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:28:13.373436 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:28:13.373445 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:28:13.373454 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:28:13.373464 | orchestrator | ok: [testbed-manager]
2026-02-02 00:28:13.373474 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:28:13.373483 | orchestrator |
2026-02-02 00:28:13.373493 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-02-02 00:28:13.373513 | orchestrator | Monday 02 February 2026 00:27:52 +0000 (0:00:00.874) 0:00:47.771 *******
2026-02-02 00:28:13.373528 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:28:13.373540 | orchestrator |
2026-02-02 00:28:13.373549 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-02-02 00:28:13.373560 | orchestrator | Monday 02 February 2026 00:27:53 +0000 (0:00:00.299) 0:00:48.071 *******
2026-02-02 00:28:13.373569 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:28:13.373579 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:28:13.373589 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:28:13.373598 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:28:13.373608 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:28:13.373618 | orchestrator | changed: [testbed-manager]
2026-02-02 00:28:13.373627 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:28:13.373637 | orchestrator |
2026-02-02 00:28:13.373666 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-02-02 00:28:13.373677 | orchestrator | Monday 02 February 2026 00:27:54 +0000 (0:00:01.050) 0:00:49.121 *******
2026-02-02 00:28:13.373687 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:28:13.373696 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:28:13.373706 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:28:13.373716 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:28:13.373725 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:28:13.373735 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:28:13.373744 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:28:13.373754 | orchestrator |
2026-02-02 00:28:13.373763 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-02-02 00:28:13.373773 | orchestrator | Monday 02 February 2026 00:27:54 +0000 (0:00:00.249) 0:00:49.371 *******
2026-02-02 00:28:13.373783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:28:13.373793 | orchestrator |
2026-02-02 00:28:13.373803 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-02-02 00:28:13.373812 | orchestrator | Monday 02 February 2026 00:27:54 +0000 (0:00:00.319) 0:00:49.690 *******
2026-02-02 00:28:13.373822 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:28:13.373832 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:28:13.373841 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:28:13.373851 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:28:13.373860 | orchestrator | ok: [testbed-manager]
2026-02-02 00:28:13.373869 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:28:13.373879 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:28:13.373888 | orchestrator |
2026-02-02 00:28:13.373898 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-02-02 00:28:13.373908 | orchestrator | Monday 02 February 2026 00:27:56 +0000 (0:00:01.750) 0:00:51.441 *******
2026-02-02 00:28:13.373917 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:28:13.373927 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:28:13.373937 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:28:13.373946 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:28:13.373956 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:28:13.373965 | orchestrator | changed: [testbed-manager]
2026-02-02 00:28:13.373975 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:28:13.373984 | orchestrator |
2026-02-02 00:28:13.373994 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-02-02 00:28:13.374004 | orchestrator | Monday 02 February 2026 00:27:57 +0000 (0:00:01.115) 0:00:52.557 *******
2026-02-02 00:28:13.374069 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:28:13.374082 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:28:13.374098 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:28:13.374108 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:28:13.374118 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:28:13.374128 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:28:13.374137 | orchestrator | changed: [testbed-manager]
2026-02-02 00:28:13.374147 | orchestrator |
2026-02-02 00:28:13.374157 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-02-02 00:28:13.374167 | orchestrator | Monday 02 February 2026 00:28:10 +0000 (0:00:12.849) 0:01:05.406 *******
2026-02-02 00:28:13.374177 | orchestrator | ok: [testbed-manager]
2026-02-02 00:28:13.374186 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:28:13.374196 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:28:13.374206 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:28:13.374216 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:28:13.374225 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:28:13.374235 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:28:13.374244 | orchestrator |
2026-02-02 00:28:13.374254 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-02-02 00:28:13.374264 | orchestrator | Monday 02 February 2026 00:28:11 +0000 (0:00:01.279) 0:01:06.686 *******
2026-02-02 00:28:13.374274 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:28:13.374283 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:28:13.374293 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:28:13.374303 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:28:13.374312 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:28:13.374322 | orchestrator | ok: [testbed-manager]
2026-02-02 00:28:13.374363 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:28:13.374381 | orchestrator |
2026-02-02 00:28:13.374397 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-02-02 00:28:13.374414 | orchestrator | Monday 02 February 2026 00:28:12 +0000 (0:00:00.913) 0:01:07.599 *******
2026-02-02 00:28:13.374430 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:28:13.374441 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:28:13.374450 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:28:13.374459 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:28:13.374469 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:28:13.374478 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:28:13.374488 | orchestrator | ok: [testbed-manager]
2026-02-02 00:28:13.374497 | orchestrator |
2026-02-02 00:28:13.374507 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-02-02 00:28:13.374518 | orchestrator | Monday 02 February 2026 00:28:12 +0000 (0:00:00.239) 0:01:07.838 *******
2026-02-02 00:28:13.374527 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:28:13.374537 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:28:13.374552 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:28:13.374561 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:28:13.374571 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:28:13.374580 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:28:13.374590 | orchestrator | ok: [testbed-manager]
2026-02-02 00:28:13.374599 | orchestrator |
2026-02-02 00:28:13.374609 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-02-02 00:28:13.374618 | orchestrator | Monday 02 February 2026 00:28:13 +0000 (0:00:00.291) 0:01:08.108 *******
2026-02-02 00:28:13.374628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:28:13.374639 | orchestrator |
2026-02-02 00:28:13.374657 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-02-02 00:30:34.978845 | orchestrator | Monday 02 February 2026 00:28:13 +0000 (0:00:00.291) 0:01:08.399 *******
2026-02-02 00:30:34.978960 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:30:34.978977 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:30:34.978989 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:30:34.979000 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:30:34.979036 | orchestrator | ok: [testbed-manager]
2026-02-02 00:30:34.979047 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:30:34.979058 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:30:34.979069 | orchestrator |
2026-02-02 00:30:34.979080 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-02-02 00:30:34.979091 | orchestrator | Monday 02 February 2026 00:28:15 +0000 (0:00:01.874) 0:01:10.274 *******
2026-02-02 00:30:34.979103 | orchestrator | changed: [testbed-manager]
2026-02-02 00:30:34.979114 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:30:34.979125 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:30:34.979136 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:30:34.979146 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:30:34.979157 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:30:34.979167 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:30:34.979178 | orchestrator |
2026-02-02 00:30:34.979189 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-02-02 00:30:34.979200 | orchestrator | Monday 02 February 2026 00:28:15 +0000 (0:00:00.592) 0:01:10.867 *******
2026-02-02 00:30:34.979211 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:30:34.979271 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:30:34.979283 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:30:34.979294 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:30:34.979305 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:30:34.979315 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:30:34.979326 | orchestrator | ok: [testbed-manager]
2026-02-02 00:30:34.979337 | orchestrator |
2026-02-02 00:30:34.979348 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-02-02 00:30:34.979359 | orchestrator | Monday 02 February 2026 00:28:16 +0000 (0:00:00.254) 0:01:11.121 *******
2026-02-02 00:30:34.979370 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:30:34.979382 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:30:34.979395 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:30:34.979408 | orchestrator | ok: [testbed-manager]
2026-02-02 00:30:34.979420 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:30:34.979433 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:30:34.979445 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:30:34.979458 | orchestrator |
2026-02-02 00:30:34.979470 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-02 00:30:34.979483 | orchestrator | Monday 02 February 2026 00:28:17 +0000 (0:00:01.267) 0:01:12.389 *******
2026-02-02 00:30:34.979495 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:30:34.979508 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:30:34.979521 | orchestrator | changed: [testbed-manager]
2026-02-02 00:30:34.979534 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:30:34.979547 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:30:34.979559 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:30:34.979571 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:30:34.979584 | orchestrator |
2026-02-02 00:30:34.979597 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-02 00:30:34.979609 | orchestrator | Monday 02 February 2026 00:28:19 +0000 (0:00:01.878) 0:01:14.268 *******
2026-02-02 00:30:34.979620 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:30:34.979631 | orchestrator | ok: [testbed-manager]
2026-02-02 00:30:34.979642 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:30:34.979653 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:30:34.979664 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:30:34.979674 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:30:34.979685 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:30:34.979696 | orchestrator |
2026-02-02 00:30:34.979706 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-02 00:30:34.979717 | orchestrator | Monday 02 February 2026 00:28:21 +0000 (0:00:02.732) 0:01:17.000 *******
2026-02-02 00:30:34.979728 | orchestrator | ok: [testbed-manager]
2026-02-02 00:30:34.979743 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:30:34.979761 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:30:34.979792 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:30:34.979810 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:30:34.979827 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:30:34.979847 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:30:34.979864 | orchestrator |
2026-02-02 00:30:34.979882 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-02 00:30:34.979894 | orchestrator | Monday 02 February 2026 00:28:58 +0000 (0:00:36.736) 0:01:53.737 *******
2026-02-02 00:30:34.979905 | orchestrator | changed: [testbed-manager]
2026-02-02 00:30:34.979916 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:30:34.979927 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:30:34.979937 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:30:34.979948 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:30:34.979959 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:30:34.979969 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:30:34.979980 | orchestrator |
2026-02-02 00:30:34.979995 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-02 00:30:34.980021 | orchestrator | Monday 02 February 2026 00:30:19 +0000 (0:01:20.383) 0:03:14.121 *******
2026-02-02 00:30:34.980043 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:30:34.980061 | orchestrator | ok: [testbed-manager]
2026-02-02 00:30:34.980079 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:30:34.980097 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:30:34.980132 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:30:34.980148 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:30:34.980164 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:30:34.980180 | orchestrator |
2026-02-02 00:30:34.980196 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-02 00:30:34.980215 | orchestrator | Monday 02 February 2026 00:30:20 +0000 (0:00:01.846) 0:03:15.968 *******
2026-02-02 00:30:34.980262 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:30:34.980279 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:30:34.980295 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:30:34.980312 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:30:34.980330 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:30:34.980347 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:30:34.980364 | orchestrator | changed: [testbed-manager]
2026-02-02 00:30:34.980384 | orchestrator |
2026-02-02 00:30:34.980402 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-02 00:30:34.980421 | orchestrator | Monday 02 February 2026 00:30:33 +0000 (0:00:12.881) 0:03:28.849 *******
2026-02-02 00:30:34.980482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-02 00:30:34.980516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-02 00:30:34.980540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-02 00:30:34.980582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-02 00:30:34.980700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-02 00:30:34.980722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-02 00:30:34.980741 | orchestrator |
2026-02-02 00:30:34.980761 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-02 00:30:34.980778 | orchestrator | Monday 02 February 2026 00:30:34 +0000 (0:00:00.414) 0:03:29.264 *******
2026-02-02 00:30:34.980794 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 00:30:34.980812 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:30:34.980830 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 00:30:34.980850 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 00:30:34.980870 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:30:34.980889 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:30:34.980907 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 00:30:34.980927 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:30:34.980945 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 00:30:34.980966 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 00:30:34.980987 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 00:30:34.981008 | orchestrator |
2026-02-02 00:30:34.981026 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-02 00:30:34.981045 | orchestrator | Monday 02 February 2026 00:30:34 +0000 (0:00:00.673) 0:03:29.938 *******
2026-02-02 00:30:34.981063 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 00:30:34.981096 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 00:30:34.981116 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 00:30:34.981134 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 00:30:34.981152 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 00:30:34.981180 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 00:30:42.922506 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 00:30:42.922616 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-02 00:30:42.922632 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 00:30:42.922645 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 00:30:42.922657 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 00:30:42.922693 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 00:30:42.922705 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:30:42.922718 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 00:30:42.922729 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 00:30:42.922740 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 00:30:42.922750 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 00:30:42.922761 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 00:30:42.922772 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-02 00:30:42.922783 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 00:30:42.922794 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 00:30:42.922805 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 00:30:42.922815 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 00:30:42.922826 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 00:30:42.922837 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 00:30:42.922848 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 00:30:42.922858 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 00:30:42.922869 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 00:30:42.922880 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-02 00:30:42.922890 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 00:30:42.922901 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 00:30:42.922912 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:30:42.922923 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:30:42.922934 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 00:30:42.922945 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 00:30:42.922955 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 00:30:42.922966 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 00:30:42.922977 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 00:30:42.922987 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 00:30:42.922998 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 00:30:42.923009 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-02 00:30:42.923020 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 00:30:42.923046 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 00:30:42.923059 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:30:42.923073 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 00:30:42.923094 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 00:30:42.923107 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 00:30:42.923118 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 00:30:42.923129 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 00:30:42.923157 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 00:30:42.923168 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 00:30:42.923179 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 00:30:42.923190 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 00:30:42.923201 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 00:30:42.923268 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 00:30:42.923283 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 00:30:42.923294 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 00:30:42.923305 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 00:30:42.923316 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 00:30:42.923326 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 00:30:42.923337 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 00:30:42.923348 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 00:30:42.923359 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 00:30:42.923370 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
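For reference, the ten rabbitmq-profile items applied by the task above correspond to a sysctl drop-in like the following. This is an illustrative reconstruction from the task items in the log; the actual file name and path written by the osism.commons.sysctl role are not shown here, and only the control-plane hosts (testbed-node-0/1/2) report `changed` while the compute nodes and the manager skip the profile.

```
# Illustrative /etc/sysctl.d drop-in (file name assumed, values taken from the log)
net.ipv4.tcp_keepalive_time = 6
net.ipv4.tcp_keepalive_intvl = 3
net.ipv4.tcp_keepalive_probes = 3
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 4096
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 8192
```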
2026-02-02 00:30:42.923381 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-02 00:30:42.923392 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 00:30:42.923403 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 00:30:42.923413 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 00:30:42.923424 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 00:30:42.923435 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 00:30:42.923446 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 00:30:42.923457 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-02 00:30:42.923468 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 00:30:42.923479 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 00:30:42.923490 | orchestrator |
2026-02-02 00:30:42.923501 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-02 00:30:42.923513 | orchestrator | Monday 02 February 2026 00:30:41 +0000 (0:00:06.918) 0:03:36.856 *******
2026-02-02 00:30:42.923523 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 00:30:42.923543 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 00:30:42.923554 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 00:30:42.923564 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 00:30:42.923575 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 00:30:42.923586 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 00:30:42.923597 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 00:30:42.923607 | orchestrator |
2026-02-02 00:30:42.923618 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-02 00:30:42.923629 | orchestrator | Monday 02 February 2026 00:30:42 +0000 (0:00:00.643) 0:03:37.499 *******
2026-02-02 00:30:42.923640 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:42.923657 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:42.923669 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:30:42.923679 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:42.923690 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:30:42.923701 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:30:42.923712 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:42.923723 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:30:42.923734 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:42.923744 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:42.923769 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:59.690666 | orchestrator |
2026-02-02 00:30:59.690780 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-02 00:30:59.690797 | orchestrator | Monday 02 February 2026 00:30:42 +0000 (0:00:00.477) 0:03:37.977 *******
2026-02-02 00:30:59.690809 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:59.690821 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:30:59.690834 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:59.690845 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:59.690856 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:30:59.690867 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:30:59.690878 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:59.690888 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:30:59.690899 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:59.690910 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:59.690921 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 00:30:59.690932 | orchestrator |
2026-02-02 00:30:59.690943 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-02-02 00:30:59.690954 | orchestrator | Monday 02 February 2026 00:30:45 +0000 (0:00:02.533) 0:03:40.510 *******
2026-02-02 00:30:59.690965 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-02 00:30:59.690976 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:30:59.691010 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-02 00:30:59.691022 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-02 00:30:59.691033 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:30:59.691043 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:30:59.691054 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-02 00:30:59.691065 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:30:59.691076 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-02 00:30:59.691086 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-02 00:30:59.691097 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-02 00:30:59.691108 | orchestrator |
2026-02-02 00:30:59.691119 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-02-02 00:30:59.691130 | orchestrator | Monday 02 February 2026 00:30:46 +0000 (0:00:00.332) 0:03:41.077 *******
2026-02-02 00:30:59.691141 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:30:59.691152 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:30:59.691163 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:30:59.691174 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:30:59.691184 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:30:59.691195 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:30:59.691242 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:30:59.691256 | orchestrator |
2026-02-02 00:30:59.691270 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-02-02 00:30:59.691287 | orchestrator | Monday 02 February 2026 00:30:46 +0000 (0:00:00.332) 0:03:41.410 *******
2026-02-02 00:30:59.691304 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:30:59.691317 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:30:59.691330 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:30:59.691343 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:30:59.691355 | orchestrator | ok: [testbed-manager]
2026-02-02 00:30:59.691367 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:30:59.691381 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:30:59.691393 | orchestrator |
2026-02-02 00:30:59.691406 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-02-02 00:30:59.691419 | orchestrator | Monday 02 February 2026 00:30:51 +0000 (0:00:05.429) 0:03:46.839 *******
2026-02-02 00:30:59.691432 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-02-02 00:30:59.691444 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-02-02 00:30:59.691457 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:30:59.691469 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-02-02 00:30:59.691482 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:30:59.691494 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-02-02 00:30:59.691507 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:30:59.691520 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-02-02 00:30:59.691533 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:30:59.691546 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-02-02 00:30:59.691560 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:30:59.691571 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:30:59.691582 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-02-02 00:30:59.691593 |
orchestrator | skipping: [testbed-manager] 2026-02-02 00:30:59.691612 | orchestrator | 2026-02-02 00:30:59.691625 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-02 00:30:59.691636 | orchestrator | Monday 02 February 2026 00:30:52 +0000 (0:00:00.309) 0:03:47.149 ******* 2026-02-02 00:30:59.691646 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-02-02 00:30:59.691657 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-02-02 00:30:59.691677 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-02 00:30:59.691705 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-02 00:30:59.691717 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-02 00:30:59.691728 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-02-02 00:30:59.691738 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-02 00:30:59.691749 | orchestrator | 2026-02-02 00:30:59.691760 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-02 00:30:59.691771 | orchestrator | Monday 02 February 2026 00:30:53 +0000 (0:00:01.202) 0:03:48.351 ******* 2026-02-02 00:30:59.691784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-02 00:30:59.691797 | orchestrator | 2026-02-02 00:30:59.691808 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-02 00:30:59.691819 | orchestrator | Monday 02 February 2026 00:30:53 +0000 (0:00:00.492) 0:03:48.843 ******* 2026-02-02 00:30:59.691830 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:30:59.691841 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:30:59.691852 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:30:59.691862 | orchestrator | ok: 
[testbed-node-5] 2026-02-02 00:30:59.691873 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:30:59.691884 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:30:59.691894 | orchestrator | ok: [testbed-manager] 2026-02-02 00:30:59.691905 | orchestrator | 2026-02-02 00:30:59.691916 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-02 00:30:59.691927 | orchestrator | Monday 02 February 2026 00:30:56 +0000 (0:00:02.266) 0:03:51.110 ******* 2026-02-02 00:30:59.691937 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:30:59.691948 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:30:59.691959 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:30:59.691969 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:30:59.691980 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:30:59.691990 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:30:59.692001 | orchestrator | ok: [testbed-manager] 2026-02-02 00:30:59.692012 | orchestrator | 2026-02-02 00:30:59.692023 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-02 00:30:59.692034 | orchestrator | Monday 02 February 2026 00:30:57 +0000 (0:00:01.661) 0:03:52.772 ******* 2026-02-02 00:30:59.692044 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:30:59.692055 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:30:59.692066 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:30:59.692077 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:30:59.692087 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:30:59.692098 | orchestrator | changed: [testbed-manager] 2026-02-02 00:30:59.692108 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:30:59.692119 | orchestrator | 2026-02-02 00:30:59.692130 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-02 00:30:59.692141 | orchestrator | Monday 02 February 2026 00:30:58 +0000 (0:00:00.737) 
0:03:53.510 ******* 2026-02-02 00:30:59.692152 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:30:59.692163 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:30:59.692173 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:30:59.692184 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:30:59.692195 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:30:59.692426 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:30:59.692458 | orchestrator | ok: [testbed-manager] 2026-02-02 00:30:59.692470 | orchestrator | 2026-02-02 00:30:59.692481 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-02 00:30:59.692492 | orchestrator | Monday 02 February 2026 00:30:59 +0000 (0:00:00.618) 0:03:54.128 ******* 2026-02-02 00:30:59.692507 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769990744.1110945, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:30:59.692535 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769990758.815242, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:30:59.692548 | orchestrator | changed: 
[testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769990737.7835743, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:30:59.692571 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769990723.25942, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:31:05.154931 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769990746.2959704, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:31:05.155045 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 
567, 'dev': 2049, 'nlink': 1, 'atime': 1769990717.4200757, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:31:05.155065 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769990720.5369618, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:31:05.155080 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:31:05.155119 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:31:05.155149 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:31:05.155162 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:31:05.155254 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:31:05.155272 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:31:05.155286 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 00:31:05.155300 | orchestrator | 2026-02-02 00:31:05.155315 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-02 00:31:05.155342 | orchestrator | Monday 02 February 2026 00:31:00 +0000 (0:00:01.063) 0:03:55.191 ******* 2026-02-02 00:31:05.155356 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:31:05.155371 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:31:05.155382 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:31:05.155393 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:31:05.155406 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:31:05.155418 | orchestrator | changed: [testbed-manager] 2026-02-02 00:31:05.155431 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:31:05.155444 | orchestrator | 2026-02-02 00:31:05.155457 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-02-02 00:31:05.155470 | orchestrator | Monday 02 February 2026 00:31:01 +0000 (0:00:01.172) 0:03:56.364 ******* 2026-02-02 00:31:05.155484 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:31:05.155498 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:31:05.155511 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:31:05.155524 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:31:05.155532 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:31:05.155540 | orchestrator | changed: [testbed-manager] 2026-02-02 00:31:05.155548 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:31:05.155556 | orchestrator | 2026-02-02 00:31:05.155564 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-02 00:31:05.155572 | orchestrator | Monday 02 February 2026 00:31:02 +0000 (0:00:01.221) 0:03:57.585 ******* 2026-02-02 00:31:05.155580 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:31:05.155588 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:31:05.155596 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:31:05.155603 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:31:05.155611 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:31:05.155619 | orchestrator | changed: [testbed-manager] 2026-02-02 00:31:05.155627 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:31:05.155635 | orchestrator | 2026-02-02 00:31:05.155650 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-02 00:31:05.155658 | orchestrator | Monday 02 February 2026 00:31:03 +0000 (0:00:01.105) 0:03:58.691 ******* 2026-02-02 00:31:05.155666 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:31:05.155674 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:31:05.155682 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:31:05.155690 | orchestrator | skipping: [testbed-node-0] 
2026-02-02 00:31:05.155698 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:31:05.155706 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:31:05.155713 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:31:05.155721 | orchestrator | 2026-02-02 00:31:05.155729 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-02 00:31:05.155737 | orchestrator | Monday 02 February 2026 00:31:03 +0000 (0:00:00.295) 0:03:58.986 ******* 2026-02-02 00:31:05.155745 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:31:05.155754 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:31:05.155762 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:31:05.155769 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:31:05.155777 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:31:05.155785 | orchestrator | ok: [testbed-manager] 2026-02-02 00:31:05.155793 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:31:05.155800 | orchestrator | 2026-02-02 00:31:05.155808 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-02 00:31:05.155816 | orchestrator | Monday 02 February 2026 00:31:04 +0000 (0:00:00.750) 0:03:59.737 ******* 2026-02-02 00:31:05.155826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-02 00:31:05.155836 | orchestrator | 2026-02-02 00:31:05.155844 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-02 00:31:05.155862 | orchestrator | Monday 02 February 2026 00:31:05 +0000 (0:00:00.446) 0:04:00.183 ******* 2026-02-02 00:32:23.599767 | orchestrator | ok: [testbed-manager] 2026-02-02 00:32:23.599875 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:32:23.599891 | orchestrator | changed: 
[testbed-node-3] 2026-02-02 00:32:23.599902 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:32:23.599912 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:32:23.599923 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:32:23.599933 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:32:23.599944 | orchestrator | 2026-02-02 00:32:23.599955 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-02-02 00:32:23.599967 | orchestrator | Monday 02 February 2026 00:31:13 +0000 (0:00:08.323) 0:04:08.506 ******* 2026-02-02 00:32:23.599977 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:32:23.599987 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:32:23.599996 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:32:23.600006 | orchestrator | ok: [testbed-manager] 2026-02-02 00:32:23.600016 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:32:23.600026 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:32:23.600036 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:32:23.600045 | orchestrator | 2026-02-02 00:32:23.600055 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-02 00:32:23.600065 | orchestrator | Monday 02 February 2026 00:31:14 +0000 (0:00:01.485) 0:04:09.992 ******* 2026-02-02 00:32:23.600075 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:32:23.600085 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:32:23.600095 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:32:23.600104 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:32:23.600114 | orchestrator | ok: [testbed-manager] 2026-02-02 00:32:23.600124 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:32:23.600134 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:32:23.600143 | orchestrator | 2026-02-02 00:32:23.600153 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-02 00:32:23.600163 | orchestrator | 
Monday 02 February 2026 00:31:16 +0000 (0:00:01.236) 0:04:11.228 ******* 2026-02-02 00:32:23.600248 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:32:23.600258 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:32:23.600268 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:32:23.600277 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:32:23.600287 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:32:23.600297 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:32:23.600307 | orchestrator | ok: [testbed-manager] 2026-02-02 00:32:23.600319 | orchestrator | 2026-02-02 00:32:23.600331 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-02-02 00:32:23.600344 | orchestrator | Monday 02 February 2026 00:31:16 +0000 (0:00:00.285) 0:04:11.514 ******* 2026-02-02 00:32:23.600355 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:32:23.600366 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:32:23.600377 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:32:23.600389 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:32:23.600400 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:32:23.600411 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:32:23.600422 | orchestrator | ok: [testbed-manager] 2026-02-02 00:32:23.600433 | orchestrator | 2026-02-02 00:32:23.600445 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-02-02 00:32:23.600457 | orchestrator | Monday 02 February 2026 00:31:16 +0000 (0:00:00.325) 0:04:11.839 ******* 2026-02-02 00:32:23.600469 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:32:23.600480 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:32:23.600490 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:32:23.600501 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:32:23.600512 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:32:23.600523 | orchestrator | ok: [testbed-node-2] 2026-02-02 
00:32:23.600534 | orchestrator | ok: [testbed-manager] 2026-02-02 00:32:23.600546 | orchestrator | 2026-02-02 00:32:23.600557 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-02-02 00:32:23.600594 | orchestrator | Monday 02 February 2026 00:31:17 +0000 (0:00:00.329) 0:04:12.168 ******* 2026-02-02 00:32:23.600606 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:32:23.600617 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:32:23.600628 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:32:23.600640 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:32:23.600651 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:32:23.600664 | orchestrator | ok: [testbed-manager] 2026-02-02 00:32:23.600674 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:32:23.600684 | orchestrator | 2026-02-02 00:32:23.600694 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-02-02 00:32:23.600704 | orchestrator | Monday 02 February 2026 00:31:22 +0000 (0:00:05.508) 0:04:17.677 ******* 2026-02-02 00:32:23.600716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-02 00:32:23.600728 | orchestrator | 2026-02-02 00:32:23.600738 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-02-02 00:32:23.600748 | orchestrator | Monday 02 February 2026 00:31:23 +0000 (0:00:00.450) 0:04:18.127 ******* 2026-02-02 00:32:23.600757 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-02-02 00:32:23.600767 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-02-02 00:32:23.600777 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-02-02 00:32:23.600787 | orchestrator | skipping: 
[testbed-node-3] 2026-02-02 00:32:23.600797 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-02-02 00:32:23.600806 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-02-02 00:32:23.600816 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:32:23.600825 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-02-02 00:32:23.600835 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-02-02 00:32:23.600845 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:32:23.600855 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-02-02 00:32:23.600864 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:32:23.600874 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-02-02 00:32:23.600883 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-02-02 00:32:23.600893 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-02-02 00:32:23.600903 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-02-02 00:32:23.600930 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:32:23.600941 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:32:23.600951 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-02-02 00:32:23.600961 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-02-02 00:32:23.600970 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:32:23.600980 | orchestrator | 2026-02-02 00:32:23.600990 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-02-02 00:32:23.601000 | orchestrator | Monday 02 February 2026 00:31:23 +0000 (0:00:00.383) 0:04:18.511 ******* 2026-02-02 00:32:23.601010 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:32:23.601020 | orchestrator |
2026-02-02 00:32:23.601030 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-02 00:32:23.601039 | orchestrator | Monday 02 February 2026 00:31:23 +0000 (0:00:00.431) 0:04:18.942 *******
2026-02-02 00:32:23.601049 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-02 00:32:23.601059 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-02 00:32:23.601069 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:32:23.601086 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-02 00:32:23.601095 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:32:23.601105 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-02 00:32:23.601114 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:32:23.601124 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-02 00:32:23.601133 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:32:23.601143 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-02 00:32:23.601152 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:32:23.601162 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:32:23.601191 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-02 00:32:23.601201 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:32:23.601211 | orchestrator |
2026-02-02 00:32:23.601221 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-02 00:32:23.601231 | orchestrator | Monday 02 February 2026 00:31:24 +0000 (0:00:00.329) 0:04:19.272 *******
2026-02-02 00:32:23.601241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:32:23.601251 | orchestrator |
2026-02-02 00:32:23.601278 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-02 00:32:23.601288 | orchestrator | Monday 02 February 2026 00:31:24 +0000 (0:00:00.457) 0:04:19.730 *******
2026-02-02 00:32:23.601298 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:32:23.601308 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:32:23.601317 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:32:23.601327 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:32:23.601336 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:32:23.601346 | orchestrator | changed: [testbed-manager]
2026-02-02 00:32:23.601356 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:32:23.601365 | orchestrator |
2026-02-02 00:32:23.601375 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-02 00:32:23.601385 | orchestrator | Monday 02 February 2026 00:31:58 +0000 (0:00:34.075) 0:04:53.805 *******
2026-02-02 00:32:23.601394 | orchestrator | changed: [testbed-manager]
2026-02-02 00:32:23.601404 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:32:23.601413 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:32:23.601423 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:32:23.601433 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:32:23.601446 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:32:23.601456 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:32:23.601466 | orchestrator |
2026-02-02 00:32:23.601476 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-02 00:32:23.601486 | orchestrator | Monday 02 February 2026 00:32:07 +0000 (0:00:08.282) 0:05:02.087 *******
2026-02-02 00:32:23.601495 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:32:23.601505 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:32:23.601514 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:32:23.601524 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:32:23.601533 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:32:23.601543 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:32:23.601552 | orchestrator | changed: [testbed-manager]
2026-02-02 00:32:23.601562 | orchestrator |
2026-02-02 00:32:23.601572 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-02 00:32:23.601582 | orchestrator | Monday 02 February 2026 00:32:15 +0000 (0:00:08.316) 0:05:10.404 *******
2026-02-02 00:32:23.601591 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:32:23.601601 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:32:23.601610 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:32:23.601620 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:32:23.601636 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:32:23.601646 | orchestrator | ok: [testbed-manager]
2026-02-02 00:32:23.601655 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:32:23.601665 | orchestrator |
2026-02-02 00:32:23.601674 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-02 00:32:23.601684 | orchestrator | Monday 02 February 2026 00:32:17 +0000 (0:00:01.937) 0:05:12.342 *******
2026-02-02 00:32:23.601694 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:32:23.601704 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:32:23.601713 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:32:23.601723 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:32:23.601732 | orchestrator | changed: [testbed-manager]
2026-02-02 00:32:23.601742 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:32:23.601751 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:32:23.601761 | orchestrator |
2026-02-02 00:32:23.601777 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-02 00:32:35.370151 | orchestrator | Monday 02 February 2026 00:32:23 +0000 (0:00:06.286) 0:05:18.629 *******
2026-02-02 00:32:35.370357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:32:35.370375 | orchestrator |
2026-02-02 00:32:35.370385 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-02 00:32:35.370395 | orchestrator | Monday 02 February 2026 00:32:24 +0000 (0:00:00.445) 0:05:19.075 *******
2026-02-02 00:32:35.370404 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:32:35.370413 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:32:35.370422 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:32:35.370431 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:32:35.370440 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:32:35.370450 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:32:35.370458 | orchestrator | changed: [testbed-manager]
2026-02-02 00:32:35.370466 | orchestrator |
2026-02-02 00:32:35.370475 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-02 00:32:35.370485 | orchestrator | Monday 02 February 2026 00:32:24 +0000 (0:00:00.759) 0:05:19.835 *******
2026-02-02 00:32:35.370493 | orchestrator | ok: [testbed-manager]
2026-02-02 00:32:35.370503 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:32:35.370512 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:32:35.370521 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:32:35.370530 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:32:35.370539 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:32:35.370547 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:32:35.370556 | orchestrator |
2026-02-02 00:32:35.370565 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-02 00:32:35.370574 | orchestrator | Monday 02 February 2026 00:32:26 +0000 (0:00:01.830) 0:05:21.665 *******
2026-02-02 00:32:35.370583 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:32:35.370592 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:32:35.370601 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:32:35.370610 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:32:35.370619 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:32:35.370628 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:32:35.370637 | orchestrator | changed: [testbed-manager]
2026-02-02 00:32:35.370647 | orchestrator |
2026-02-02 00:32:35.370656 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-02 00:32:35.370664 | orchestrator | Monday 02 February 2026 00:32:27 +0000 (0:00:00.825) 0:05:22.490 *******
2026-02-02 00:32:35.370673 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:32:35.370681 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:32:35.370689 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:32:35.370697 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:32:35.370705 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:32:35.370736 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:32:35.370745 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:32:35.370753 | orchestrator |
2026-02-02 00:32:35.370760 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-02 00:32:35.370767 | orchestrator | Monday 02 February 2026 00:32:27 +0000 (0:00:00.275) 0:05:22.765 *******
2026-02-02 00:32:35.370775 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:32:35.370782 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:32:35.370791 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:32:35.370799 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:32:35.370806 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:32:35.370814 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:32:35.370822 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:32:35.370830 | orchestrator |
2026-02-02 00:32:35.370838 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-02 00:32:35.370846 | orchestrator | Monday 02 February 2026 00:32:28 +0000 (0:00:00.291) 0:05:23.174 *******
2026-02-02 00:32:35.370853 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:32:35.370860 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:32:35.370868 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:32:35.370876 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:32:35.370898 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:32:35.370906 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:32:35.370914 | orchestrator | ok: [testbed-manager]
2026-02-02 00:32:35.370922 | orchestrator |
2026-02-02 00:32:35.370930 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-02 00:32:35.370938 | orchestrator | Monday 02 February 2026 00:32:28 +0000 (0:00:00.301) 0:05:23.465 *******
2026-02-02 00:32:35.370946 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:32:35.370954 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:32:35.370961 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:32:35.370970 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:32:35.370978 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:32:35.370986 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:32:35.370994 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:32:35.371003 | orchestrator |
2026-02-02 00:32:35.371012 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-02 00:32:35.371021 | orchestrator | Monday 02 February 2026 00:32:28 +0000 (0:00:00.301) 0:05:23.767 *******
2026-02-02 00:32:35.371029 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:32:35.371037 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:32:35.371046 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:32:35.371054 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:32:35.371062 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:32:35.371070 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:32:35.371077 | orchestrator | ok: [testbed-manager]
2026-02-02 00:32:35.371085 | orchestrator |
2026-02-02 00:32:35.371093 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-02 00:32:35.371101 | orchestrator | Monday 02 February 2026 00:32:29 +0000 (0:00:00.317) 0:05:24.085 *******
2026-02-02 00:32:35.371109 | orchestrator | ok: [testbed-node-3] =>
2026-02-02 00:32:35.371117 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 00:32:35.371125 | orchestrator | ok: [testbed-node-4] =>
2026-02-02 00:32:35.371133 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 00:32:35.371141 | orchestrator | ok: [testbed-node-5] =>
2026-02-02 00:32:35.371149 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 00:32:35.371173 | orchestrator | ok: [testbed-node-0] =>
2026-02-02 00:32:35.371181 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 00:32:35.371207 | orchestrator | ok: [testbed-node-1] =>
2026-02-02 00:32:35.371216 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 00:32:35.371224 | orchestrator | ok: [testbed-node-2] =>
2026-02-02 00:32:35.371232 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 00:32:35.371240 | orchestrator | ok: [testbed-manager] =>
2026-02-02 00:32:35.371248 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 00:32:35.371265 | orchestrator |
2026-02-02 00:32:35.371273 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-02 00:32:35.371281 | orchestrator | Monday 02 February 2026 00:32:29 +0000 (0:00:00.329) 0:05:24.415 *******
2026-02-02 00:32:35.371288 | orchestrator | ok: [testbed-node-3] =>
2026-02-02 00:32:35.371295 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 00:32:35.371304 | orchestrator | ok: [testbed-node-4] =>
2026-02-02 00:32:35.371311 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 00:32:35.371319 | orchestrator | ok: [testbed-node-5] =>
2026-02-02 00:32:35.371328 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 00:32:35.371336 | orchestrator | ok: [testbed-node-0] =>
2026-02-02 00:32:35.371344 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 00:32:35.371352 | orchestrator | ok: [testbed-node-1] =>
2026-02-02 00:32:35.371360 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 00:32:35.371367 | orchestrator | ok: [testbed-node-2] =>
2026-02-02 00:32:35.371375 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 00:32:35.371383 | orchestrator | ok: [testbed-manager] =>
2026-02-02 00:32:35.371390 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 00:32:35.371398 | orchestrator |
2026-02-02 00:32:35.371407 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-02 00:32:35.371415 | orchestrator | Monday 02 February 2026 00:32:29 +0000 (0:00:00.320) 0:05:24.736 *******
2026-02-02 00:32:35.371423 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:32:35.371431 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:32:35.371439 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:32:35.371447 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:32:35.371455 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:32:35.371463 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:32:35.371471 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:32:35.371479 | orchestrator |
2026-02-02 00:32:35.371487 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-02 00:32:35.371495 | orchestrator | Monday 02 February 2026 00:32:29 +0000 (0:00:00.278) 0:05:25.014 *******
2026-02-02 00:32:35.371503 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:32:35.371511 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:32:35.371519 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:32:35.371527 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:32:35.371534 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:32:35.371542 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:32:35.371550 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:32:35.371558 | orchestrator |
2026-02-02 00:32:35.371566 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-02 00:32:35.371574 | orchestrator | Monday 02 February 2026 00:32:30 +0000 (0:00:00.305) 0:05:25.319 *******
2026-02-02 00:32:35.371583 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:32:35.371592 | orchestrator |
2026-02-02 00:32:35.371599 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-02 00:32:35.371607 | orchestrator | Monday 02 February 2026 00:32:30 +0000 (0:00:00.573) 0:05:25.893 *******
2026-02-02 00:32:35.371614 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:32:35.371621 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:32:35.371630 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:32:35.371637 | orchestrator | ok: [testbed-manager]
2026-02-02 00:32:35.371645 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:32:35.371653 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:32:35.371660 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:32:35.371669 | orchestrator |
2026-02-02 00:32:35.371677 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-02 00:32:35.371691 | orchestrator | Monday 02 February 2026 00:32:31 +0000 (0:00:00.866) 0:05:26.760 *******
2026-02-02 00:32:35.371705 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:32:35.371713 | orchestrator | ok: [testbed-manager]
2026-02-02 00:32:35.371721 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:32:35.371729 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:32:35.371736 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:32:35.371744 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:32:35.371751 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:32:35.371759 | orchestrator |
2026-02-02 00:32:35.371767 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-02 00:32:35.371777 | orchestrator | Monday 02 February 2026 00:32:34 +0000 (0:00:03.233) 0:05:29.994 *******
2026-02-02 00:32:35.371785 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-02 00:32:35.371793 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-02 00:32:35.371801 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-02 00:32:35.371809 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-02 00:32:35.371817 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-02 00:32:35.371824 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-02 00:32:35.371832 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:32:35.371840 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-02 00:32:35.371848 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-02 00:32:35.371856 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-02 00:32:35.371864 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:32:35.371872 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-02 00:32:35.371880 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-02 00:32:35.371888 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-02 00:32:35.371896 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:32:35.371904 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-02 00:32:35.371918 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-02 00:33:38.590929 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-02 00:33:38.591036 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:33:38.591051 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-02 00:33:38.591062 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-02 00:33:38.591072 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-02 00:33:38.591082 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:33:38.591092 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:33:38.591102 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-02 00:33:38.591111 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-02 00:33:38.591121 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-02 00:33:38.591160 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:33:38.591171 | orchestrator |
2026-02-02 00:33:38.591182 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-02 00:33:38.591194 | orchestrator | Monday 02 February 2026 00:32:35 +0000 (0:00:00.658) 0:05:30.652 *******
2026-02-02 00:33:38.591204 | orchestrator | ok: [testbed-manager]
2026-02-02 00:33:38.591214 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:33:38.591223 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:33:38.591233 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:33:38.591243 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:33:38.591252 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:33:38.591262 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:33:38.591272 | orchestrator |
2026-02-02 00:33:38.591281 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-02 00:33:38.591291 | orchestrator | Monday 02 February 2026 00:32:42 +0000 (0:00:06.744) 0:05:37.397 *******
2026-02-02 00:33:38.591301 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:33:38.591335 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:33:38.591345 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:33:38.591355 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:33:38.591364 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:33:38.591374 | orchestrator | ok: [testbed-manager]
2026-02-02 00:33:38.591383 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:33:38.591392 | orchestrator |
2026-02-02 00:33:38.591402 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-02 00:33:38.591412 | orchestrator | Monday 02 February 2026 00:32:43 +0000 (0:00:01.179) 0:05:38.577 *******
2026-02-02 00:33:38.591421 | orchestrator | ok: [testbed-manager]
2026-02-02 00:33:38.591431 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:33:38.591440 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:33:38.591449 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:33:38.591459 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:33:38.591468 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:33:38.591479 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:33:38.591491 | orchestrator |
2026-02-02 00:33:38.591503 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-02 00:33:38.591515 | orchestrator | Monday 02 February 2026 00:32:52 +0000 (0:00:08.575) 0:05:47.153 *******
2026-02-02 00:33:38.591527 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:33:38.591538 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:33:38.591549 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:33:38.591561 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:33:38.591572 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:33:38.591584 | orchestrator | changed: [testbed-manager]
2026-02-02 00:33:38.591595 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:33:38.591606 | orchestrator |
2026-02-02 00:33:38.591617 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-02 00:33:38.591629 | orchestrator | Monday 02 February 2026 00:32:55 +0000 (0:00:03.433) 0:05:50.586 *******
2026-02-02 00:33:38.591640 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:33:38.591651 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:33:38.591663 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:33:38.591674 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:33:38.591685 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:33:38.591697 | orchestrator | ok: [testbed-manager]
2026-02-02 00:33:38.591708 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:33:38.591719 | orchestrator |
2026-02-02 00:33:38.591745 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-02 00:33:38.591757 | orchestrator | Monday 02 February 2026 00:32:56 +0000 (0:00:01.390) 0:05:51.976 *******
2026-02-02 00:33:38.591769 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:33:38.591781 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:33:38.591793 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:33:38.591804 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:33:38.591815 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:33:38.591826 | orchestrator | ok: [testbed-manager]
2026-02-02 00:33:38.591838 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:33:38.591849 | orchestrator |
2026-02-02 00:33:38.591859 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-02 00:33:38.591869 | orchestrator | Monday 02 February 2026 00:32:58 +0000 (0:00:01.484) 0:05:53.461 *******
2026-02-02 00:33:38.591879 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:33:38.591889 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:33:38.591898 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:33:38.591908 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:33:38.591917 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:33:38.591927 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:33:38.591936 | orchestrator | changed: [testbed-manager]
2026-02-02 00:33:38.591946 | orchestrator |
2026-02-02 00:33:38.591956 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-02 00:33:38.591973 | orchestrator | Monday 02 February 2026 00:32:59 +0000 (0:00:01.060) 0:05:54.522 *******
2026-02-02 00:33:38.591983 | orchestrator | ok: [testbed-manager]
2026-02-02 00:33:38.591992 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:33:38.592002 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:33:38.592011 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:33:38.592020 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:33:38.592030 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:33:38.592040 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:33:38.592049 | orchestrator |
2026-02-02 00:33:38.592059 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-02 00:33:38.592083 | orchestrator | Monday 02 February 2026 00:33:09 +0000 (0:00:09.727) 0:06:04.249 *******
2026-02-02 00:33:38.592094 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:33:38.592103 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:33:38.592113 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:33:38.592143 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:33:38.592160 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:33:38.592175 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:33:38.592193 | orchestrator | changed: [testbed-manager]
2026-02-02 00:33:38.592208 | orchestrator |
2026-02-02 00:33:38.592224 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-02 00:33:38.592236 | orchestrator | Monday 02 February 2026 00:33:10 +0000 (0:00:00.890) 0:06:05.140 *******
2026-02-02 00:33:38.592246 | orchestrator | ok: [testbed-manager]
2026-02-02 00:33:38.592255 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:33:38.592265 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:33:38.592274 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:33:38.592284 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:33:38.592293 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:33:38.592302 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:33:38.592312 | orchestrator |
2026-02-02 00:33:38.592322 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-02 00:33:38.592331 | orchestrator | Monday 02 February 2026 00:33:19 +0000 (0:00:09.508) 0:06:14.649 *******
2026-02-02 00:33:38.592341 | orchestrator | ok: [testbed-manager]
2026-02-02 00:33:38.592350 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:33:38.592360 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:33:38.592369 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:33:38.592379 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:33:38.592388 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:33:38.592397 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:33:38.592407 | orchestrator |
2026-02-02 00:33:38.592417 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-02 00:33:38.592426 | orchestrator | Monday 02 February 2026 00:33:31 +0000 (0:00:11.980) 0:06:26.629 *******
2026-02-02 00:33:38.592436 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-02 00:33:38.592446 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-02 00:33:38.592456 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-02 00:33:38.592467 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-02 00:33:38.592478 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-02 00:33:38.592489 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-02 00:33:38.592499 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-02 00:33:38.592510 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-02 00:33:38.592521 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-02 00:33:38.592531 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-02 00:33:38.592542 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-02 00:33:38.592553 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-02 00:33:38.592564 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-02 00:33:38.592574 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-02 00:33:38.592593 | orchestrator |
2026-02-02 00:33:38.592604 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-02 00:33:38.592615 | orchestrator | Monday 02 February 2026 00:33:32 +0000 (0:00:01.206) 0:06:27.835 *******
2026-02-02 00:33:38.592626 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:33:38.592637 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:33:38.592648 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:33:38.592658 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:33:38.592669 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:33:38.592679 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:33:38.592690 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:33:38.592701 | orchestrator |
2026-02-02 00:33:38.592711 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-02 00:33:38.592722 | orchestrator | Monday 02 February 2026 00:33:33 +0000 (0:00:00.546) 0:06:28.382 *******
2026-02-02 00:33:38.592733 | orchestrator | ok: [testbed-manager]
2026-02-02 00:33:38.592744 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:33:38.592755 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:33:38.592766 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:33:38.592776 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:33:38.592787 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:33:38.592798 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:33:38.592808 | orchestrator |
2026-02-02 00:33:38.592820 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-02 00:33:38.592832 | orchestrator | Monday 02 February 2026 00:33:37 +0000 (0:00:04.249) 0:06:32.632 *******
2026-02-02 00:33:38.592843 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:33:38.592854 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:33:38.592865 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:33:38.592876 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:33:38.592887 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:33:38.592897 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:33:38.592908 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:33:38.592919 | orchestrator |
2026-02-02 00:33:38.592930 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-02 00:33:38.592941 | orchestrator | Monday 02 February 2026 00:33:38 +0000 (0:00:00.699) 0:06:33.331 *******
2026-02-02 00:33:38.592952 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-02 00:33:38.592963 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-02 00:33:38.592974 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:33:38.592985 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-02 00:33:38.592996 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-02 00:33:38.593006 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:33:38.593017 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-02 00:33:38.593031 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-02 00:33:38.593072 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:33:38.593104 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-02 00:33:58.808349 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-02 00:33:58.808468 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:33:58.808484 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-02 00:33:58.808492 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-02 00:33:58.808500 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:33:58.808508 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-02 00:33:58.808557 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-02 00:33:58.808566 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:33:58.808573 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-02 00:33:58.808601 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-02 00:33:58.808609 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:33:58.808616 | orchestrator |
2026-02-02 00:33:58.808625 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-02-02 00:33:58.808634 | orchestrator | Monday 02 February 2026 00:33:38 +0000 (0:00:00.590) 0:06:33.922 *******
2026-02-02 00:33:58.808641 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:33:58.808648 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:33:58.808656 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:33:58.808663 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:33:58.808670 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:33:58.808677 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:33:58.808684 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:33:58.808691 | orchestrator |
2026-02-02 00:33:58.808699 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-02-02 00:33:58.808707 | orchestrator | Monday 02 February 2026 00:33:39 +0000 (0:00:00.540) 0:06:34.462 *******
2026-02-02 00:33:58.808714 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:33:58.808721 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:33:58.808728 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:33:58.808735 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:33:58.808743 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:33:58.808750 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:33:58.808757 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:33:58.808764 | orchestrator |
2026-02-02 00:33:58.808771 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-02-02 00:33:58.808779 | orchestrator | Monday 02 February 2026 00:33:39 +0000 (0:00:00.526) 0:06:34.989 *******
2026-02-02 00:33:58.808786 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:33:58.808793 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:33:58.808800 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:33:58.808807 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:33:58.808814 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:33:58.808821 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:33:58.808828 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:33:58.808835 | orchestrator |
2026-02-02 00:33:58.808842 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-02-02 00:33:58.808850 | orchestrator | Monday 02 February 2026 00:33:40 +0000 (0:00:00.543) 0:06:35.532 *******
2026-02-02 00:33:58.808857 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:33:58.808864 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:33:58.808871 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:33:58.808878 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:33:58.808885 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:33:58.808894 | orchestrator | ok: [testbed-manager]
2026-02-02 00:33:58.808902 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:33:58.808910 | orchestrator |
2026-02-02 00:33:58.808919 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-02-02 00:33:58.808927 | orchestrator | Monday 02 February 2026 00:33:42 +0000 (0:00:01.983) 0:06:37.515 *******
2026-02-02 00:33:58.808940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:33:58.808951 | orchestrator |
2026-02-02 00:33:58.808959 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-02-02 00:33:58.808968 | orchestrator | Monday 02 February 2026 00:33:43 +0000 (0:00:00.912) 0:06:38.428 *******
2026-02-02 00:33:58.808976 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:33:58.808985 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:33:58.808994 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:33:58.809002 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:33:58.809011 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:33:58.809025 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:33:58.809034 | orchestrator | ok: [testbed-manager]
2026-02-02 00:33:58.809043 | orchestrator |
2026-02-02 00:33:58.809052 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-02-02 00:33:58.809060 | orchestrator | Monday 02 February 2026 00:33:44 +0000 (0:00:00.843) 0:06:39.271 *******
2026-02-02 00:33:58.809069 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:33:58.809078 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:33:58.809087 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:33:58.809096 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:33:58.809104 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:33:58.809137 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:33:58.809148 | orchestrator | ok: [testbed-manager]
2026-02-02 00:33:58.809155 | orchestrator |
2026-02-02 00:33:58.809163 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-02-02 00:33:58.809170 | orchestrator | Monday 02 February 2026 00:33:45 +0000 (0:00:00.875) 0:06:40.146 *******
2026-02-02 00:33:58.809177 | orchestrator | changed: [testbed-node-3] 2026-02-02
00:33:58.809185 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:33:58.809192 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:33:58.809199 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:33:58.809206 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:33:58.809213 | orchestrator | ok: [testbed-manager] 2026-02-02 00:33:58.809220 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:33:58.809227 | orchestrator | 2026-02-02 00:33:58.809234 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-02-02 00:33:58.809256 | orchestrator | Monday 02 February 2026 00:33:46 +0000 (0:00:01.546) 0:06:41.693 ******* 2026-02-02 00:33:58.809263 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:33:58.809271 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:33:58.809278 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:33:58.809285 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:33:58.809292 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:33:58.809299 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:33:58.809306 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:33:58.809313 | orchestrator | 2026-02-02 00:33:58.809321 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-02 00:33:58.809328 | orchestrator | Monday 02 February 2026 00:33:48 +0000 (0:00:01.469) 0:06:43.163 ******* 2026-02-02 00:33:58.809335 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:33:58.809343 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:33:58.809350 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:33:58.809357 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:33:58.809369 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:33:58.809381 | orchestrator | ok: [testbed-manager] 2026-02-02 00:33:58.809393 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:33:58.809405 | orchestrator | 2026-02-02 
00:33:58.809416 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-02 00:33:58.809430 | orchestrator | Monday 02 February 2026 00:33:49 +0000 (0:00:01.440) 0:06:44.603 ******* 2026-02-02 00:33:58.809442 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:33:58.809455 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:33:58.809463 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:33:58.809470 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:33:58.809477 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:33:58.809484 | orchestrator | changed: [testbed-manager] 2026-02-02 00:33:58.809491 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:33:58.809498 | orchestrator | 2026-02-02 00:33:58.809505 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-02 00:33:58.809513 | orchestrator | Monday 02 February 2026 00:33:51 +0000 (0:00:01.499) 0:06:46.103 ******* 2026-02-02 00:33:58.809520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-02 00:33:58.809538 | orchestrator | 2026-02-02 00:33:58.809546 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-02 00:33:58.809553 | orchestrator | Monday 02 February 2026 00:33:52 +0000 (0:00:01.104) 0:06:47.207 ******* 2026-02-02 00:33:58.809560 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:33:58.809567 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:33:58.809574 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:33:58.809581 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:33:58.809588 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:33:58.809595 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:33:58.809602 | orchestrator | ok: 
[testbed-manager] 2026-02-02 00:33:58.809609 | orchestrator | 2026-02-02 00:33:58.809616 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-02 00:33:58.809624 | orchestrator | Monday 02 February 2026 00:33:53 +0000 (0:00:01.551) 0:06:48.758 ******* 2026-02-02 00:33:58.809631 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:33:58.809638 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:33:58.809645 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:33:58.809652 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:33:58.809659 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:33:58.809666 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:33:58.809673 | orchestrator | ok: [testbed-manager] 2026-02-02 00:33:58.809680 | orchestrator | 2026-02-02 00:33:58.809687 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-02 00:33:58.809694 | orchestrator | Monday 02 February 2026 00:33:54 +0000 (0:00:01.266) 0:06:50.025 ******* 2026-02-02 00:33:58.809701 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:33:58.809708 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:33:58.809715 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:33:58.809722 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:33:58.809729 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:33:58.809736 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:33:58.809744 | orchestrator | ok: [testbed-manager] 2026-02-02 00:33:58.809751 | orchestrator | 2026-02-02 00:33:58.809758 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-02 00:33:58.809765 | orchestrator | Monday 02 February 2026 00:33:56 +0000 (0:00:01.233) 0:06:51.259 ******* 2026-02-02 00:33:58.809772 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:33:58.809779 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:33:58.809786 | orchestrator | ok: [testbed-node-5] 2026-02-02 
00:33:58.809793 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:33:58.809800 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:33:58.809807 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:33:58.809815 | orchestrator | ok: [testbed-manager] 2026-02-02 00:33:58.809822 | orchestrator | 2026-02-02 00:33:58.809829 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-02 00:33:58.809836 | orchestrator | Monday 02 February 2026 00:33:57 +0000 (0:00:01.506) 0:06:52.766 ******* 2026-02-02 00:33:58.809844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-02 00:33:58.809851 | orchestrator | 2026-02-02 00:33:58.809858 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 00:33:58.809865 | orchestrator | Monday 02 February 2026 00:33:58 +0000 (0:00:00.927) 0:06:53.694 ******* 2026-02-02 00:33:58.809873 | orchestrator | 2026-02-02 00:33:58.809880 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 00:33:58.809887 | orchestrator | Monday 02 February 2026 00:33:58 +0000 (0:00:00.043) 0:06:53.737 ******* 2026-02-02 00:33:58.809894 | orchestrator | 2026-02-02 00:33:58.809901 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 00:33:58.809908 | orchestrator | Monday 02 February 2026 00:33:58 +0000 (0:00:00.045) 0:06:53.782 ******* 2026-02-02 00:33:58.809922 | orchestrator | 2026-02-02 00:33:58.809929 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 00:33:58.809942 | orchestrator | Monday 02 February 2026 00:33:58 +0000 (0:00:00.052) 0:06:53.835 ******* 2026-02-02 00:34:27.361619 | orchestrator | 
2026-02-02 00:34:27.361696 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 00:34:27.361704 | orchestrator | Monday 02 February 2026 00:33:58 +0000 (0:00:00.040) 0:06:53.875 ******* 2026-02-02 00:34:27.361710 | orchestrator | 2026-02-02 00:34:27.361715 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 00:34:27.361720 | orchestrator | Monday 02 February 2026 00:33:58 +0000 (0:00:00.040) 0:06:53.916 ******* 2026-02-02 00:34:27.361724 | orchestrator | 2026-02-02 00:34:27.361729 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 00:34:27.361734 | orchestrator | Monday 02 February 2026 00:33:58 +0000 (0:00:00.055) 0:06:53.971 ******* 2026-02-02 00:34:27.361739 | orchestrator | 2026-02-02 00:34:27.361743 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-02 00:34:27.361749 | orchestrator | Monday 02 February 2026 00:33:58 +0000 (0:00:00.042) 0:06:54.013 ******* 2026-02-02 00:34:27.361753 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:34:27.361759 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:34:27.361763 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:34:27.361768 | orchestrator | 2026-02-02 00:34:27.361772 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-02 00:34:27.361777 | orchestrator | Monday 02 February 2026 00:34:00 +0000 (0:00:01.438) 0:06:55.452 ******* 2026-02-02 00:34:27.361781 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:34:27.361787 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:34:27.361792 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:34:27.361796 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:34:27.361801 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:34:27.361805 | orchestrator | changed: 
[testbed-node-2] 2026-02-02 00:34:27.361810 | orchestrator | changed: [testbed-manager] 2026-02-02 00:34:27.361814 | orchestrator | 2026-02-02 00:34:27.361819 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-02 00:34:27.361823 | orchestrator | Monday 02 February 2026 00:34:02 +0000 (0:00:01.705) 0:06:57.158 ******* 2026-02-02 00:34:27.361828 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:34:27.361833 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:34:27.361837 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:34:27.361841 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:34:27.361846 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:34:27.361850 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:34:27.361855 | orchestrator | changed: [testbed-manager] 2026-02-02 00:34:27.361859 | orchestrator | 2026-02-02 00:34:27.361864 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-02 00:34:27.361868 | orchestrator | Monday 02 February 2026 00:34:03 +0000 (0:00:01.284) 0:06:58.442 ******* 2026-02-02 00:34:27.361873 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:34:27.361877 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:34:27.361882 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:34:27.361886 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:34:27.361891 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:34:27.361895 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:34:27.361900 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:34:27.361904 | orchestrator | 2026-02-02 00:34:27.361909 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-02 00:34:27.361913 | orchestrator | Monday 02 February 2026 00:34:05 +0000 (0:00:02.350) 0:07:00.793 ******* 2026-02-02 00:34:27.361918 | orchestrator | skipping: [testbed-node-3] 
2026-02-02 00:34:27.361922 | orchestrator | 2026-02-02 00:34:27.361927 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-02 00:34:27.361931 | orchestrator | Monday 02 February 2026 00:34:05 +0000 (0:00:00.111) 0:07:00.905 ******* 2026-02-02 00:34:27.361952 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:34:27.361958 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:34:27.361962 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:34:27.361967 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:34:27.361971 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:34:27.361976 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:34:27.361980 | orchestrator | ok: [testbed-manager] 2026-02-02 00:34:27.361985 | orchestrator | 2026-02-02 00:34:27.362000 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-02 00:34:27.362005 | orchestrator | Monday 02 February 2026 00:34:06 +0000 (0:00:01.042) 0:07:01.948 ******* 2026-02-02 00:34:27.362009 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:34:27.362050 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:34:27.362056 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:34:27.362061 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:34:27.362065 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:34:27.362070 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:34:27.362074 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:34:27.362079 | orchestrator | 2026-02-02 00:34:27.362083 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-02 00:34:27.362088 | orchestrator | Monday 02 February 2026 00:34:07 +0000 (0:00:00.607) 0:07:02.555 ******* 2026-02-02 00:34:27.362093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-02 00:34:27.362140 | orchestrator | 2026-02-02 00:34:27.362145 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-02 00:34:27.362149 | orchestrator | Monday 02 February 2026 00:34:08 +0000 (0:00:01.460) 0:07:04.016 ******* 2026-02-02 00:34:27.362154 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:34:27.362159 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:34:27.362163 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:34:27.362168 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:34:27.362172 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:34:27.362177 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:34:27.362181 | orchestrator | ok: [testbed-manager] 2026-02-02 00:34:27.362186 | orchestrator | 2026-02-02 00:34:27.362191 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-02 00:34:27.362197 | orchestrator | Monday 02 February 2026 00:34:09 +0000 (0:00:00.969) 0:07:04.985 ******* 2026-02-02 00:34:27.362202 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-02 00:34:27.362218 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-02 00:34:27.362224 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-02 00:34:27.362230 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-02 00:34:27.362236 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-02 00:34:27.362241 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-02 00:34:27.362247 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-02 00:34:27.362253 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-02 00:34:27.362258 | orchestrator | changed: [testbed-node-5] => 
(item=docker_images) 2026-02-02 00:34:27.362264 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-02 00:34:27.362269 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-02 00:34:27.362274 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-02 00:34:27.362279 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-02 00:34:27.362285 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-02 00:34:27.362290 | orchestrator | 2026-02-02 00:34:27.362296 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-02-02 00:34:27.362301 | orchestrator | Monday 02 February 2026 00:34:12 +0000 (0:00:02.774) 0:07:07.760 ******* 2026-02-02 00:34:27.362313 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:34:27.362320 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:34:27.362328 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:34:27.362335 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:34:27.362343 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:34:27.362351 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:34:27.362358 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:34:27.362365 | orchestrator | 2026-02-02 00:34:27.362373 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-02 00:34:27.362382 | orchestrator | Monday 02 February 2026 00:34:13 +0000 (0:00:00.615) 0:07:08.375 ******* 2026-02-02 00:34:27.362392 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-02 00:34:27.362401 | orchestrator | 2026-02-02 00:34:27.362408 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose 
apt preferences file] *** 2026-02-02 00:34:27.362416 | orchestrator | Monday 02 February 2026 00:34:14 +0000 (0:00:00.994) 0:07:09.370 ******* 2026-02-02 00:34:27.362424 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:34:27.362432 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:34:27.362439 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:34:27.362447 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:34:27.362455 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:34:27.362464 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:34:27.362472 | orchestrator | ok: [testbed-manager] 2026-02-02 00:34:27.362481 | orchestrator | 2026-02-02 00:34:27.362487 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-02 00:34:27.362492 | orchestrator | Monday 02 February 2026 00:34:15 +0000 (0:00:00.982) 0:07:10.352 ******* 2026-02-02 00:34:27.362498 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:34:27.362503 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:34:27.362508 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:34:27.362514 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:34:27.362519 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:34:27.362524 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:34:27.362530 | orchestrator | ok: [testbed-manager] 2026-02-02 00:34:27.362535 | orchestrator | 2026-02-02 00:34:27.362540 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-02 00:34:27.362546 | orchestrator | Monday 02 February 2026 00:34:16 +0000 (0:00:01.123) 0:07:11.476 ******* 2026-02-02 00:34:27.362551 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:34:27.362562 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:34:27.362568 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:34:27.362573 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:34:27.362578 | orchestrator | skipping: [testbed-node-1] 2026-02-02 
00:34:27.362584 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:34:27.362589 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:34:27.362595 | orchestrator | 2026-02-02 00:34:27.362600 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-02 00:34:27.362605 | orchestrator | Monday 02 February 2026 00:34:16 +0000 (0:00:00.546) 0:07:12.023 ******* 2026-02-02 00:34:27.362611 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:34:27.362616 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:34:27.362621 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:34:27.362627 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:34:27.362632 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:34:27.362637 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:34:27.362643 | orchestrator | ok: [testbed-manager] 2026-02-02 00:34:27.362648 | orchestrator | 2026-02-02 00:34:27.362654 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-02-02 00:34:27.362659 | orchestrator | Monday 02 February 2026 00:34:18 +0000 (0:00:01.599) 0:07:13.622 ******* 2026-02-02 00:34:27.362670 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:34:27.362675 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:34:27.362681 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:34:27.362686 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:34:27.362691 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:34:27.362695 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:34:27.362700 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:34:27.362704 | orchestrator | 2026-02-02 00:34:27.362709 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-02-02 00:34:27.362713 | orchestrator | Monday 02 February 2026 00:34:19 +0000 (0:00:00.599) 0:07:14.221 ******* 2026-02-02 00:34:27.362718 | orchestrator | ok: 
[testbed-manager] 2026-02-02 00:34:27.362722 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:34:27.362727 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:34:27.362731 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:34:27.362736 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:34:27.362740 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:34:27.362749 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:35:00.842949 | orchestrator | 2026-02-02 00:35:00.843038 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-02-02 00:35:00.843049 | orchestrator | Monday 02 February 2026 00:34:27 +0000 (0:00:08.233) 0:07:22.455 ******* 2026-02-02 00:35:00.843056 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:35:00.843064 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:35:00.843070 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:35:00.843113 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:35:00.843120 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:35:00.843127 | orchestrator | ok: [testbed-manager] 2026-02-02 00:35:00.843134 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:35:00.843141 | orchestrator | 2026-02-02 00:35:00.843148 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-02-02 00:35:00.843154 | orchestrator | Monday 02 February 2026 00:34:29 +0000 (0:00:01.638) 0:07:24.094 ******* 2026-02-02 00:35:00.843161 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:35:00.843168 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:35:00.843174 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:35:00.843180 | orchestrator | ok: [testbed-manager] 2026-02-02 00:35:00.843187 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:35:00.843193 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:35:00.843200 | orchestrator | changed: [testbed-node-1] 2026-02-02 
00:35:00.843206 | orchestrator | 2026-02-02 00:35:00.843213 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-02-02 00:35:00.843219 | orchestrator | Monday 02 February 2026 00:34:30 +0000 (0:00:01.695) 0:07:25.790 ******* 2026-02-02 00:35:00.843225 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:35:00.843231 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:35:00.843238 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:35:00.843244 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:35:00.843250 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:35:00.843256 | orchestrator | ok: [testbed-manager] 2026-02-02 00:35:00.843263 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:35:00.843269 | orchestrator | 2026-02-02 00:35:00.843275 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-02 00:35:00.843282 | orchestrator | Monday 02 February 2026 00:34:32 +0000 (0:00:01.632) 0:07:27.422 ******* 2026-02-02 00:35:00.843288 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:35:00.843294 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:35:00.843301 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:35:00.843307 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:35:00.843313 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:35:00.843319 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:35:00.843326 | orchestrator | ok: [testbed-manager] 2026-02-02 00:35:00.843332 | orchestrator | 2026-02-02 00:35:00.843338 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-02 00:35:00.843363 | orchestrator | Monday 02 February 2026 00:34:33 +0000 (0:00:00.851) 0:07:28.274 ******* 2026-02-02 00:35:00.843370 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:35:00.843376 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:35:00.843382 | orchestrator | skipping: 
[testbed-node-5] 2026-02-02 00:35:00.843388 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:35:00.843395 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:35:00.843401 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:35:00.843407 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:35:00.843413 | orchestrator | 2026-02-02 00:35:00.843419 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-02-02 00:35:00.843426 | orchestrator | Monday 02 February 2026 00:34:34 +0000 (0:00:01.047) 0:07:29.322 ******* 2026-02-02 00:35:00.843432 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:35:00.843438 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:35:00.843444 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:35:00.843450 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:35:00.843456 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:35:00.843462 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:35:00.843468 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:35:00.843474 | orchestrator | 2026-02-02 00:35:00.843480 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-02-02 00:35:00.843487 | orchestrator | Monday 02 February 2026 00:34:34 +0000 (0:00:00.587) 0:07:29.909 ******* 2026-02-02 00:35:00.843493 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:35:00.843499 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:35:00.843505 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:35:00.843513 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:35:00.843520 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:35:00.843528 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:35:00.843535 | orchestrator | ok: [testbed-manager] 2026-02-02 00:35:00.843543 | orchestrator | 2026-02-02 00:35:00.843551 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 
2026-02-02 00:35:00.843559 | orchestrator | Monday 02 February 2026 00:34:35 +0000 (0:00:00.621) 0:07:30.531 ******* 2026-02-02 00:35:00.843566 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:35:00.843574 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:35:00.843582 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:35:00.843589 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:35:00.843597 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:35:00.843604 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:35:00.843611 | orchestrator | ok: [testbed-manager] 2026-02-02 00:35:00.843618 | orchestrator | 2026-02-02 00:35:00.843625 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-02-02 00:35:00.843633 | orchestrator | Monday 02 February 2026 00:34:36 +0000 (0:00:00.725) 0:07:31.257 ******* 2026-02-02 00:35:00.843640 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:35:00.843648 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:35:00.843655 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:35:00.843663 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:35:00.843670 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:35:00.843677 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:35:00.843684 | orchestrator | ok: [testbed-manager] 2026-02-02 00:35:00.843691 | orchestrator | 2026-02-02 00:35:00.843699 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-02-02 00:35:00.843706 | orchestrator | Monday 02 February 2026 00:34:36 +0000 (0:00:00.563) 0:07:31.821 ******* 2026-02-02 00:35:00.843714 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:35:00.843721 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:35:00.843728 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:35:00.843735 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:35:00.843743 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:35:00.843750 | orchestrator | ok: [testbed-node-2] 
2026-02-02 00:35:00.843758 | orchestrator | ok: [testbed-manager]
2026-02-02 00:35:00.843765 | orchestrator |
2026-02-02 00:35:00.843797 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-02 00:35:00.843809 | orchestrator | Monday 02 February 2026 00:34:42 +0000 (0:00:06.006) 0:07:37.827 *******
2026-02-02 00:35:00.843820 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:35:00.843830 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:35:00.843841 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:35:00.843850 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:35:00.843860 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:35:00.843871 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:35:00.843881 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:35:00.843891 | orchestrator |
2026-02-02 00:35:00.843902 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-02 00:35:00.843913 | orchestrator | Monday 02 February 2026 00:34:43 +0000 (0:00:00.574) 0:07:38.402 *******
2026-02-02 00:35:00.843925 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:35:00.843934 | orchestrator |
2026-02-02 00:35:00.843940 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-02 00:35:00.843946 | orchestrator | Monday 02 February 2026 00:34:44 +0000 (0:00:01.039) 0:07:39.442 *******
2026-02-02 00:35:00.843953 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:35:00.843959 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:35:00.843965 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:35:00.843971 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:35:00.843977 | orchestrator | ok: [testbed-manager]
2026-02-02 00:35:00.843997 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:35:00.844003 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:35:00.844009 | orchestrator |
2026-02-02 00:35:00.844015 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-02 00:35:00.844022 | orchestrator | Monday 02 February 2026 00:34:46 +0000 (0:00:01.946) 0:07:41.389 *******
2026-02-02 00:35:00.844028 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:35:00.844034 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:35:00.844040 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:35:00.844046 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:35:00.844052 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:35:00.844058 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:35:00.844064 | orchestrator | ok: [testbed-manager]
2026-02-02 00:35:00.844070 | orchestrator |
2026-02-02 00:35:00.844111 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-02 00:35:00.844118 | orchestrator | Monday 02 February 2026 00:34:47 +0000 (0:00:01.202) 0:07:42.591 *******
2026-02-02 00:35:00.844124 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:35:00.844130 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:35:00.844136 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:35:00.844142 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:35:00.844148 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:35:00.844154 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:35:00.844160 | orchestrator | ok: [testbed-manager]
2026-02-02 00:35:00.844167 | orchestrator |
2026-02-02 00:35:00.844173 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-02 00:35:00.844179 | orchestrator | Monday 02 February 2026 00:34:48 +0000 (0:00:00.909) 0:07:43.501 *******
2026-02-02 00:35:00.844186 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 00:35:00.844193 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 00:35:00.844200 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 00:35:00.844210 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 00:35:00.844222 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 00:35:00.844229 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 00:35:00.844235 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 00:35:00.844241 | orchestrator |
2026-02-02 00:35:00.844247 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-02 00:35:00.844253 | orchestrator | Monday 02 February 2026 00:34:50 +0000 (0:00:01.935) 0:07:45.437 *******
2026-02-02 00:35:00.844260 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:35:00.844266 | orchestrator |
2026-02-02 00:35:00.844273 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-02 00:35:00.844279 | orchestrator | Monday 02 February 2026 00:34:51 +0000 (0:00:00.898) 0:07:46.335 *******
2026-02-02 00:35:00.844285 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:35:00.844291 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:35:00.844297 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:35:00.844304 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:35:00.844310 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:35:00.844316 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:35:00.844322 | orchestrator | changed: [testbed-manager]
2026-02-02 00:35:00.844328 | orchestrator |
2026-02-02 00:35:00.844340 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-02 00:35:32.244698 | orchestrator | Monday 02 February 2026 00:35:00 +0000 (0:00:09.536) 0:07:55.872 *******
2026-02-02 00:35:32.244805 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:35:32.244822 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:35:32.244834 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:35:32.244844 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:35:32.244855 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:35:32.244866 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:35:32.244877 | orchestrator | ok: [testbed-manager]
2026-02-02 00:35:32.244888 | orchestrator |
2026-02-02 00:35:32.244900 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-02 00:35:32.244912 | orchestrator | Monday 02 February 2026 00:35:02 +0000 (0:00:02.098) 0:07:57.970 *******
2026-02-02 00:35:32.244923 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:35:32.244934 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:35:32.244945 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:35:32.244956 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:35:32.244966 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:35:32.244977 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:35:32.244988 | orchestrator |
2026-02-02 00:35:32.244999 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-02 00:35:32.245009 | orchestrator | Monday 02 February 2026 00:35:04 +0000 (0:00:01.364) 0:07:59.334 *******
2026-02-02 00:35:32.245020 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:35:32.245032 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:35:32.245043 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:35:32.245101 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:35:32.245115 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:35:32.245126 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:35:32.245136 | orchestrator | changed: [testbed-manager]
2026-02-02 00:35:32.245147 | orchestrator |
2026-02-02 00:35:32.245158 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-02 00:35:32.245169 | orchestrator |
2026-02-02 00:35:32.245180 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-02 00:35:32.245215 | orchestrator | Monday 02 February 2026 00:35:05 +0000 (0:00:00.753) 0:08:00.728 *******
2026-02-02 00:35:32.245229 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:35:32.245242 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:35:32.245255 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:35:32.245268 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:35:32.245280 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:35:32.245293 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:35:32.245306 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:35:32.245323 | orchestrator |
2026-02-02 00:35:32.245341 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-02 00:35:32.245361 | orchestrator |
2026-02-02 00:35:32.245380 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-02 00:35:32.245399 | orchestrator | Monday 02 February 2026 00:35:06 +0000 (0:00:00.753) 0:08:01.482 *******
2026-02-02 00:35:32.245412 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:35:32.245427 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:35:32.245440 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:35:32.245453 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:35:32.245465 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:35:32.245476 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:35:32.245487 | orchestrator | changed: [testbed-manager]
2026-02-02 00:35:32.245497 | orchestrator |
2026-02-02 00:35:32.245508 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-02 00:35:32.245519 | orchestrator | Monday 02 February 2026 00:35:07 +0000 (0:00:01.326) 0:08:02.808 *******
2026-02-02 00:35:32.245530 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:35:32.245540 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:35:32.245551 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:35:32.245561 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:35:32.245572 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:35:32.245582 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:35:32.245593 | orchestrator | ok: [testbed-manager]
2026-02-02 00:35:32.245603 | orchestrator |
2026-02-02 00:35:32.245614 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-02 00:35:32.245625 | orchestrator | Monday 02 February 2026 00:35:09 +0000 (0:00:01.461) 0:08:04.270 *******
2026-02-02 00:35:32.245650 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:35:32.245661 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:35:32.245672 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:35:32.245683 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:35:32.245694 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:35:32.245704 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:35:32.245715 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:35:32.245733 | orchestrator |
2026-02-02 00:35:32.245754 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-02 00:35:32.245769 | orchestrator | Monday 02 February 2026 00:35:09 +0000 (0:00:00.529) 0:08:04.800 *******
2026-02-02 00:35:32.245780 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:35:32.245793 | orchestrator |
2026-02-02 00:35:32.245804 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-02 00:35:32.245815 | orchestrator | Monday 02 February 2026 00:35:10 +0000 (0:00:01.052) 0:08:05.852 *******
2026-02-02 00:35:32.245828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:35:32.245850 | orchestrator |
2026-02-02 00:35:32.245885 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-02 00:35:32.245898 | orchestrator | Monday 02 February 2026 00:35:11 +0000 (0:00:00.878) 0:08:06.730 *******
2026-02-02 00:35:32.245929 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:35:32.245941 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:35:32.245952 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:35:32.245962 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:35:32.245973 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:35:32.245984 | orchestrator | changed: [testbed-manager]
2026-02-02 00:35:32.245995 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:35:32.246006 | orchestrator |
2026-02-02 00:35:32.246131 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-02 00:35:32.246158 | orchestrator | Monday 02 February 2026 00:35:20 +0000 (0:00:09.064) 0:08:15.795 *******
2026-02-02 00:35:32.246171 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:35:32.246181 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:35:32.246192 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:35:32.246203 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:35:32.246214 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:35:32.246224 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:35:32.246241 | orchestrator | changed: [testbed-manager]
2026-02-02 00:35:32.246257 | orchestrator |
2026-02-02 00:35:32.246268 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-02 00:35:32.246279 | orchestrator | Monday 02 February 2026 00:35:21 +0000 (0:00:00.855) 0:08:16.650 *******
2026-02-02 00:35:32.246290 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:35:32.246300 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:35:32.246311 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:35:32.246322 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:35:32.246332 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:35:32.246343 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:35:32.246354 | orchestrator | changed: [testbed-manager]
2026-02-02 00:35:32.246364 | orchestrator |
2026-02-02 00:35:32.246375 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-02 00:35:32.246386 | orchestrator | Monday 02 February 2026 00:35:22 +0000 (0:00:01.312) 0:08:17.963 *******
2026-02-02 00:35:32.246397 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:35:32.246408 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:35:32.246419 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:35:32.246429 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:35:32.246440 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:35:32.246450 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:35:32.246461 | orchestrator | changed: [testbed-manager]
2026-02-02 00:35:32.246471 | orchestrator |
2026-02-02 00:35:32.246483 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-02 00:35:32.246493 | orchestrator | Monday 02 February 2026 00:35:24 +0000 (0:00:01.956) 0:08:19.919 *******
2026-02-02 00:35:32.246504 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:35:32.246515 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:35:32.246525 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:35:32.246536 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:35:32.246547 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:35:32.246557 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:35:32.246568 | orchestrator | changed: [testbed-manager]
2026-02-02 00:35:32.246579 | orchestrator |
2026-02-02 00:35:32.246589 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-02 00:35:32.246600 | orchestrator | Monday 02 February 2026 00:35:26 +0000 (0:00:01.234) 0:08:21.154 *******
2026-02-02 00:35:32.246611 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:35:32.246622 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:35:32.246633 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:35:32.246643 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:35:32.246654 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:35:32.246665 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:35:32.246675 | orchestrator | changed: [testbed-manager]
2026-02-02 00:35:32.246686 | orchestrator |
2026-02-02 00:35:32.246705 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-02 00:35:32.246717 | orchestrator |
2026-02-02 00:35:32.246727 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-02 00:35:32.246738 | orchestrator | Monday 02 February 2026 00:35:27 +0000 (0:00:01.092) 0:08:22.246 *******
2026-02-02 00:35:32.246749 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:35:32.246760 | orchestrator |
2026-02-02 00:35:32.246771 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-02 00:35:32.246782 | orchestrator | Monday 02 February 2026 00:35:28 +0000 (0:00:00.812) 0:08:23.060 *******
2026-02-02 00:35:32.246800 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:35:32.246811 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:35:32.246822 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:35:32.246833 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:35:32.246844 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:35:32.246855 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:35:32.246866 | orchestrator | ok: [testbed-manager]
2026-02-02 00:35:32.246876 | orchestrator |
2026-02-02 00:35:32.246887 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-02 00:35:32.246898 | orchestrator | Monday 02 February 2026 00:35:29 +0000 (0:00:01.082) 0:08:24.142 *******
2026-02-02 00:35:32.246909 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:35:32.246920 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:35:32.246931 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:35:32.246950 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:35:32.246970 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:35:32.246988 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:35:32.246999 | orchestrator | changed: [testbed-manager]
2026-02-02 00:35:32.247043 | orchestrator |
2026-02-02 00:35:32.247078 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-02 00:35:32.247091 | orchestrator | Monday 02 February 2026 00:35:30 +0000 (0:00:01.234) 0:08:25.377 *******
2026-02-02 00:35:32.247103 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2026-02-02 00:35:32.247113 | orchestrator |
2026-02-02 00:35:32.247124 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-02 00:35:32.247135 | orchestrator | Monday 02 February 2026 00:35:31 +0000 (0:00:01.045) 0:08:26.422 *******
2026-02-02 00:35:32.247146 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:35:32.247157 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:35:32.247168 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:35:32.247178 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:35:32.247189 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:35:32.247199 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:35:32.247210 | orchestrator | ok: [testbed-manager]
2026-02-02 00:35:32.247221 | orchestrator |
2026-02-02 00:35:32.247241 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-02 00:35:33.943645 | orchestrator | Monday 02 February 2026 00:35:32 +0000 (0:00:00.849) 0:08:27.271 *******
2026-02-02 00:35:33.943747 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:35:33.943762 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:35:33.943773 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:35:33.943784 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:35:33.943795 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:35:33.943806 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:35:33.943816 | orchestrator | changed: [testbed-manager]
2026-02-02 00:35:33.943827 | orchestrator |
2026-02-02 00:35:33.943839 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:35:33.943852 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-02 00:35:33.943892 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-02 00:35:33.943904 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-02 00:35:33.943915 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-02 00:35:33.943925 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-02 00:35:33.943936 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-02 00:35:33.943947 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-02 00:35:33.943958 | orchestrator |
2026-02-02 00:35:33.943969 | orchestrator |
2026-02-02 00:35:33.943980 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:35:33.943991 | orchestrator | Monday 02 February 2026 00:35:33 +0000 (0:00:01.123) 0:08:28.395 *******
2026-02-02 00:35:33.944002 | orchestrator | ===============================================================================
2026-02-02 00:35:33.944013 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.38s
2026-02-02 00:35:33.944024 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.74s
2026-02-02 00:35:33.944034 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.08s
2026-02-02 00:35:33.944045 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.85s
2026-02-02 00:35:33.944077 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.88s
2026-02-02 00:35:33.944090 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.85s
2026-02-02 00:35:33.944100 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.98s
2026-02-02 00:35:33.944111 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.73s
2026-02-02 00:35:33.944122 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.54s
2026-02-02 00:35:33.944133 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.51s
2026-02-02 00:35:33.944158 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.06s
2026-02-02 00:35:33.944171 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.58s
2026-02-02 00:35:33.944184 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.32s
2026-02-02 00:35:33.944197 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.32s
2026-02-02 00:35:33.944211 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.28s
2026-02-02 00:35:33.944223 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.23s
2026-02-02 00:35:33.944235 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.92s
2026-02-02 00:35:33.944247 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.75s
2026-02-02 00:35:33.944259 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.29s
2026-02-02 00:35:33.944272 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 6.01s
2026-02-02 00:35:34.305202 | orchestrator | + osism apply fail2ban
2026-02-02 00:35:47.166269 | orchestrator | 2026-02-02 00:35:47 | INFO  | Prepare task for execution of fail2ban.
2026-02-02 00:35:47.246489 | orchestrator | 2026-02-02 00:35:47 | INFO  | Task 92cbd60f-a648-4b75-9e2e-644d4d179ce0 (fail2ban) was prepared for execution.
2026-02-02 00:35:47.246612 | orchestrator | 2026-02-02 00:35:47 | INFO  | It takes a moment until task 92cbd60f-a648-4b75-9e2e-644d4d179ce0 (fail2ban) has been started and output is visible here.
2026-02-02 00:36:09.865559 | orchestrator |
2026-02-02 00:36:09.865697 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-02-02 00:36:09.865716 | orchestrator |
2026-02-02 00:36:09.865728 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-02-02 00:36:09.865741 | orchestrator | Monday 02 February 2026 00:35:51 +0000 (0:00:00.303) 0:00:00.303 *******
2026-02-02 00:36:09.865754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:36:09.865768 | orchestrator |
2026-02-02 00:36:09.865779 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-02-02 00:36:09.865790 | orchestrator | Monday 02 February 2026 00:35:53 +0000 (0:00:01.210) 0:00:01.513 *******
2026-02-02 00:36:09.865802 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:36:09.865814 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:36:09.865825 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:36:09.865836 | orchestrator | changed: [testbed-manager]
2026-02-02 00:36:09.865846 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:36:09.865857 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:36:09.865868 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:36:09.865879 | orchestrator |
2026-02-02 00:36:09.865890 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-02-02 00:36:09.865901 | orchestrator | Monday 02 February 2026 00:36:04 +0000 (0:00:11.445) 0:00:12.959 *******
2026-02-02 00:36:09.865912 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:36:09.865923 | orchestrator | changed: [testbed-manager]
2026-02-02 00:36:09.865933 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:36:09.865944 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:36:09.865955 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:36:09.865966 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:36:09.865977 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:36:09.865988 | orchestrator |
2026-02-02 00:36:09.865999 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-02 00:36:09.866010 | orchestrator | Monday 02 February 2026 00:36:06 +0000 (0:00:01.580) 0:00:14.539 *******
2026-02-02 00:36:09.866127 | orchestrator | ok: [testbed-manager]
2026-02-02 00:36:09.866144 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:36:09.866157 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:36:09.866170 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:36:09.866183 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:36:09.866196 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:36:09.866208 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:36:09.866221 | orchestrator |
2026-02-02 00:36:09.866234 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-02 00:36:09.866247 | orchestrator | Monday 02 February 2026 00:36:07 +0000 (0:00:01.523) 0:00:16.063 *******
2026-02-02 00:36:09.866261 | orchestrator | changed: [testbed-manager]
2026-02-02 00:36:09.866275 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:36:09.866288 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:36:09.866302 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:36:09.866315 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:36:09.866329 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:36:09.866342 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:36:09.866354 | orchestrator |
2026-02-02 00:36:09.866368 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:36:09.866382 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:36:09.866395 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:36:09.866451 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:36:09.866476 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:36:09.866513 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:36:09.866533 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:36:09.866553 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:36:09.866571 | orchestrator |
2026-02-02 00:36:09.866591 | orchestrator |
2026-02-02 00:36:09.866609 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:36:09.866627 | orchestrator | Monday 02 February 2026 00:36:09 +0000 (0:00:01.685) 0:00:17.748 *******
2026-02-02 00:36:09.866647 | orchestrator | ===============================================================================
2026-02-02 00:36:09.866668 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.45s
2026-02-02 00:36:09.866688 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.69s
2026-02-02 00:36:09.866708 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.58s
2026-02-02 00:36:09.866726 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.52s
2026-02-02 00:36:09.866748 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.21s
2026-02-02 00:36:10.193377 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-02 00:36:10.193476 | orchestrator | + osism apply network
2026-02-02 00:36:22.256889 | orchestrator | 2026-02-02 00:36:22 | INFO  | Prepare task for execution of network.
2026-02-02 00:36:22.328793 | orchestrator | 2026-02-02 00:36:22 | INFO  | Task 187cfa26-94a8-4515-bd2c-6f5ec0e53a4a (network) was prepared for execution.
2026-02-02 00:36:22.328873 | orchestrator | 2026-02-02 00:36:22 | INFO  | It takes a moment until task 187cfa26-94a8-4515-bd2c-6f5ec0e53a4a (network) has been started and output is visible here.
2026-02-02 00:36:53.636461 | orchestrator | 2026-02-02 00:36:53.636603 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-02-02 00:36:53.636632 | orchestrator | 2026-02-02 00:36:53.636651 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-02-02 00:36:53.636671 | orchestrator | Monday 02 February 2026 00:36:26 +0000 (0:00:00.278) 0:00:00.278 ******* 2026-02-02 00:36:53.636689 | orchestrator | ok: [testbed-manager] 2026-02-02 00:36:53.636709 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:36:53.636728 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:36:53.636746 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:36:53.636763 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:36:53.636781 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:36:53.636798 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:36:53.636818 | orchestrator | 2026-02-02 00:36:53.636838 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-02-02 00:36:53.636856 | orchestrator | Monday 02 February 2026 00:36:27 +0000 (0:00:00.750) 0:00:01.028 ******* 2026-02-02 00:36:53.636877 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:36:53.636897 | orchestrator | 2026-02-02 00:36:53.636915 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-02-02 00:36:53.636934 | orchestrator | Monday 02 February 2026 00:36:28 +0000 (0:00:01.241) 0:00:02.270 ******* 2026-02-02 00:36:53.636988 | orchestrator | ok: [testbed-manager] 2026-02-02 00:36:53.637044 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:36:53.637066 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:36:53.637086 | 
orchestrator | ok: [testbed-node-2] 2026-02-02 00:36:53.637105 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:36:53.637125 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:36:53.637146 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:36:53.637166 | orchestrator | 2026-02-02 00:36:53.637186 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-02-02 00:36:53.637205 | orchestrator | Monday 02 February 2026 00:36:30 +0000 (0:00:02.210) 0:00:04.481 ******* 2026-02-02 00:36:53.637224 | orchestrator | ok: [testbed-manager] 2026-02-02 00:36:53.637242 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:36:53.637261 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:36:53.637281 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:36:53.637302 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:36:53.637320 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:36:53.637339 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:36:53.637357 | orchestrator | 2026-02-02 00:36:53.637374 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-02-02 00:36:53.637392 | orchestrator | Monday 02 February 2026 00:36:33 +0000 (0:00:02.096) 0:00:06.577 ******* 2026-02-02 00:36:53.637408 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-02-02 00:36:53.637427 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-02-02 00:36:53.637445 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-02-02 00:36:53.637463 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-02-02 00:36:53.637481 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-02-02 00:36:53.637499 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-02-02 00:36:53.637517 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-02-02 00:36:53.637536 | orchestrator | 2026-02-02 00:36:53.637555 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2026-02-02 00:36:53.637574 | orchestrator | Monday 02 February 2026 00:36:34 +0000 (0:00:01.005) 0:00:07.582 ******* 2026-02-02 00:36:53.637593 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 00:36:53.637612 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-02 00:36:53.637629 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-02 00:36:53.637648 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 00:36:53.637667 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-02 00:36:53.637688 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-02 00:36:53.637706 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-02 00:36:53.637724 | orchestrator | 2026-02-02 00:36:53.637743 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-02-02 00:36:53.637764 | orchestrator | Monday 02 February 2026 00:36:37 +0000 (0:00:03.628) 0:00:11.210 ******* 2026-02-02 00:36:53.637785 | orchestrator | changed: [testbed-manager] 2026-02-02 00:36:53.637806 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:36:53.637826 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:36:53.637844 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:36:53.637863 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:36:53.637881 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:36:53.637898 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:36:53.637916 | orchestrator | 2026-02-02 00:36:53.637934 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-02-02 00:36:53.637953 | orchestrator | Monday 02 February 2026 00:36:39 +0000 (0:00:01.767) 0:00:12.978 ******* 2026-02-02 00:36:53.637972 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 00:36:53.637991 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-02 00:36:53.638137 | orchestrator | ok: [testbed-node-0 
-> localhost] 2026-02-02 00:36:53.638152 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-02 00:36:53.638163 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-02 00:36:53.638175 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-02 00:36:53.638208 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-02 00:36:53.638228 | orchestrator | 2026-02-02 00:36:53.638246 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-02-02 00:36:53.638263 | orchestrator | Monday 02 February 2026 00:36:41 +0000 (0:00:01.915) 0:00:14.894 ******* 2026-02-02 00:36:53.638281 | orchestrator | ok: [testbed-manager] 2026-02-02 00:36:53.638299 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:36:53.638315 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:36:53.638330 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:36:53.638348 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:36:53.638368 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:36:53.638386 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:36:53.638405 | orchestrator | 2026-02-02 00:36:53.638425 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-02-02 00:36:53.638473 | orchestrator | Monday 02 February 2026 00:36:42 +0000 (0:00:01.187) 0:00:16.081 ******* 2026-02-02 00:36:53.638494 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:36:53.638513 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:36:53.638531 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:36:53.638549 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:36:53.638567 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:36:53.638587 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:36:53.638606 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:36:53.638627 | orchestrator | 2026-02-02 00:36:53.638648 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-02-02 00:36:53.638692 | orchestrator | Monday 02 February 2026 00:36:43 +0000 (0:00:00.674) 0:00:16.756 ******* 2026-02-02 00:36:53.638712 | orchestrator | ok: [testbed-manager] 2026-02-02 00:36:53.638731 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:36:53.638751 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:36:53.638771 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:36:53.638791 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:36:53.638810 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:36:53.638828 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:36:53.638847 | orchestrator | 2026-02-02 00:36:53.638867 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-02-02 00:36:53.638887 | orchestrator | Monday 02 February 2026 00:36:45 +0000 (0:00:02.334) 0:00:19.091 ******* 2026-02-02 00:36:53.638905 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:36:53.638925 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:36:53.638943 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:36:53.638961 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:36:53.638977 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:36:53.638995 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:36:53.639094 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-02-02 00:36:53.639117 | orchestrator | 2026-02-02 00:36:53.639136 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-02-02 00:36:53.639155 | orchestrator | Monday 02 February 2026 00:36:46 +0000 (0:00:00.932) 0:00:20.023 ******* 2026-02-02 00:36:53.639173 | orchestrator | ok: [testbed-manager] 2026-02-02 00:36:53.639190 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:36:53.639208 | orchestrator | changed: [testbed-node-0] 2026-02-02 
00:36:53.639226 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:36:53.639244 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:36:53.639261 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:36:53.639371 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:36:53.639395 | orchestrator | 2026-02-02 00:36:53.639410 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-02-02 00:36:53.639428 | orchestrator | Monday 02 February 2026 00:36:48 +0000 (0:00:01.766) 0:00:21.790 ******* 2026-02-02 00:36:53.639445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:36:53.639483 | orchestrator | 2026-02-02 00:36:53.639502 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-02 00:36:53.639518 | orchestrator | Monday 02 February 2026 00:36:49 +0000 (0:00:01.313) 0:00:23.103 ******* 2026-02-02 00:36:53.639535 | orchestrator | ok: [testbed-manager] 2026-02-02 00:36:53.639553 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:36:53.639572 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:36:53.639591 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:36:53.639609 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:36:53.639625 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:36:53.639642 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:36:53.639656 | orchestrator | 2026-02-02 00:36:53.639670 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-02-02 00:36:53.639686 | orchestrator | Monday 02 February 2026 00:36:51 +0000 (0:00:02.009) 0:00:25.112 ******* 2026-02-02 00:36:53.639712 | orchestrator | ok: [testbed-manager] 2026-02-02 00:36:53.639727 | orchestrator | ok: [testbed-node-0] 2026-02-02 
00:36:53.639743 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:36:53.639758 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:36:53.639774 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:36:53.639789 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:36:53.639805 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:36:53.639820 | orchestrator | 2026-02-02 00:36:53.639836 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-02 00:36:53.639852 | orchestrator | Monday 02 February 2026 00:36:52 +0000 (0:00:00.701) 0:00:25.814 ******* 2026-02-02 00:36:53.639869 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-02-02 00:36:53.639884 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-02-02 00:36:53.639900 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-02-02 00:36:53.639917 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-02-02 00:36:53.639934 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 00:36:53.639950 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-02-02 00:36:53.639966 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 00:36:53.639981 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-02-02 00:36:53.639997 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 00:36:53.640044 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 00:36:53.640060 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 00:36:53.640076 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 00:36:53.640092 | orchestrator | skipping: [testbed-node-5] 
=> (item=/etc/netplan/01-osism.yaml)  2026-02-02 00:36:53.640109 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 00:36:53.640126 | orchestrator | 2026-02-02 00:36:53.640164 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-02-02 00:37:10.740196 | orchestrator | Monday 02 February 2026 00:36:53 +0000 (0:00:01.298) 0:00:27.112 ******* 2026-02-02 00:37:10.740294 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:37:10.740306 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:37:10.740314 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:37:10.740321 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:37:10.740328 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:37:10.740336 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:37:10.740343 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:37:10.740350 | orchestrator | 2026-02-02 00:37:10.740358 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-02-02 00:37:10.740385 | orchestrator | Monday 02 February 2026 00:36:54 +0000 (0:00:00.667) 0:00:27.779 ******* 2026-02-02 00:37:10.740394 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-0, testbed-node-2, testbed-node-5 2026-02-02 00:37:10.740404 | orchestrator | 2026-02-02 00:37:10.740411 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-02-02 00:37:10.740419 | orchestrator | Monday 02 February 2026 00:36:59 +0000 (0:00:04.829) 0:00:32.609 ******* 2026-02-02 00:37:10.740427 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740453 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:10.740460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:10.740495 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': 
{'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:10.740522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:10.740542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:10.740556 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:10.740564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 
'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:10.740571 | orchestrator | 2026-02-02 00:37:10.740578 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-02-02 00:37:10.740585 | orchestrator | Monday 02 February 2026 00:37:05 +0000 (0:00:05.953) 0:00:38.563 ******* 2026-02-02 00:37:10.740593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740600 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:10.740615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740629 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740647 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:10.740655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-02 00:37:10.740662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:10.740669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:10.740681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:10.740694 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:25.272158 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-02 00:37:25.272344 | orchestrator | 2026-02-02 00:37:25.272362 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-02-02 00:37:25.272374 | orchestrator | Monday 02 February 2026 00:37:11 +0000 (0:00:05.951) 0:00:44.515 ******* 2026-02-02 00:37:25.272386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:37:25.272397 | orchestrator | 2026-02-02 00:37:25.272407 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-02 00:37:25.272418 | orchestrator | Monday 02 February 2026 00:37:12 +0000 (0:00:01.323) 0:00:45.838 ******* 2026-02-02 00:37:25.272428 | orchestrator | ok: [testbed-manager] 2026-02-02 00:37:25.272438 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:37:25.272448 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:37:25.272457 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:37:25.272467 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:37:25.272476 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:37:25.272485 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:37:25.272495 | orchestrator | 2026-02-02 00:37:25.272504 | orchestrator | TASK [osism.commons.network : Remove unused configuration 
files] *************** 2026-02-02 00:37:25.272514 | orchestrator | Monday 02 February 2026 00:37:14 +0000 (0:00:01.733) 0:00:47.572 ******* 2026-02-02 00:37:25.272524 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-02 00:37:25.272564 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-02 00:37:25.272574 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-02 00:37:25.272584 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-02 00:37:25.272594 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-02 00:37:25.272604 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-02 00:37:25.272614 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-02 00:37:25.272624 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:37:25.272635 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-02 00:37:25.272644 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-02 00:37:25.272654 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-02 00:37:25.272664 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-02 00:37:25.272674 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-02 00:37:25.272686 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:37:25.272706 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-02 00:37:25.272736 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-02 
00:37:25.272749 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-02 00:37:25.272760 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-02 00:37:25.272772 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:37:25.272783 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-02 00:37:25.272795 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-02 00:37:25.272806 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-02 00:37:25.272817 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-02 00:37:25.272829 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:37:25.272840 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-02 00:37:25.272852 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-02 00:37:25.272863 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-02 00:37:25.272873 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-02 00:37:25.272884 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:37:25.272896 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:37:25.272907 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-02 00:37:25.272918 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-02 00:37:25.272929 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-02 00:37:25.272940 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-02 00:37:25.272980 | 
orchestrator | skipping: [testbed-node-5] 2026-02-02 00:37:25.273036 | orchestrator | 2026-02-02 00:37:25.273048 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-02-02 00:37:25.273076 | orchestrator | Monday 02 February 2026 00:37:15 +0000 (0:00:00.997) 0:00:48.569 ******* 2026-02-02 00:37:25.273087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:37:25.273097 | orchestrator | 2026-02-02 00:37:25.273106 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-02-02 00:37:25.273116 | orchestrator | Monday 02 February 2026 00:37:16 +0000 (0:00:01.289) 0:00:49.858 ******* 2026-02-02 00:37:25.273126 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:37:25.273136 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:37:25.273145 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:37:25.273155 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:37:25.273165 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:37:25.273174 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:37:25.273183 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:37:25.273193 | orchestrator | 2026-02-02 00:37:25.273219 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-02-02 00:37:25.273229 | orchestrator | Monday 02 February 2026 00:37:17 +0000 (0:00:00.656) 0:00:50.515 ******* 2026-02-02 00:37:25.273239 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:37:25.273248 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:37:25.273258 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:37:25.273267 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:37:25.273276 | 
orchestrator | skipping: [testbed-node-3] 2026-02-02 00:37:25.273286 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:37:25.273296 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:37:25.273311 | orchestrator | 2026-02-02 00:37:25.273349 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-02-02 00:37:25.273369 | orchestrator | Monday 02 February 2026 00:37:17 +0000 (0:00:00.884) 0:00:51.399 ******* 2026-02-02 00:37:25.273384 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:37:25.273399 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:37:25.273413 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:37:25.273429 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:37:25.273445 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:37:25.273459 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:37:25.273520 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:37:25.273535 | orchestrator | 2026-02-02 00:37:25.273551 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-02-02 00:37:25.273566 | orchestrator | Monday 02 February 2026 00:37:18 +0000 (0:00:00.696) 0:00:52.096 ******* 2026-02-02 00:37:25.273581 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:37:25.273597 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:37:25.273612 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:37:25.273627 | orchestrator | ok: [testbed-manager] 2026-02-02 00:37:25.273642 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:37:25.273657 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:37:25.273672 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:37:25.273688 | orchestrator | 2026-02-02 00:37:25.273704 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-02-02 00:37:25.273720 | orchestrator | Monday 02 February 2026 00:37:20 +0000 (0:00:01.866) 0:00:53.962 ******* 
2026-02-02 00:37:25.273737 | orchestrator | ok: [testbed-manager] 2026-02-02 00:37:25.273753 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:37:25.273767 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:37:25.273780 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:37:25.273795 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:37:25.273810 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:37:25.273826 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:37:25.273842 | orchestrator | 2026-02-02 00:37:25.273868 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-02-02 00:37:25.273884 | orchestrator | Monday 02 February 2026 00:37:21 +0000 (0:00:01.014) 0:00:54.977 ******* 2026-02-02 00:37:25.273900 | orchestrator | ok: [testbed-manager] 2026-02-02 00:37:25.273916 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:37:25.273932 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:37:25.273947 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:37:25.273961 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:37:25.273977 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:37:25.274170 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:37:25.274190 | orchestrator | 2026-02-02 00:37:25.274207 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-02-02 00:37:25.274226 | orchestrator | Monday 02 February 2026 00:37:23 +0000 (0:00:02.340) 0:00:57.317 ******* 2026-02-02 00:37:25.274243 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:37:25.274260 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:37:25.274277 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:37:25.274292 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:37:25.274307 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:37:25.274322 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:37:25.274337 | orchestrator | skipping: [testbed-node-5] 2026-02-02 
00:37:25.274352 | orchestrator | 2026-02-02 00:37:25.274367 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-02-02 00:37:25.274384 | orchestrator | Monday 02 February 2026 00:37:24 +0000 (0:00:00.841) 0:00:58.159 ******* 2026-02-02 00:37:25.274400 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:37:25.274416 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:37:25.274431 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:37:25.274448 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:37:25.274462 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:37:25.274478 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:37:25.274513 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:37:25.274531 | orchestrator | 2026-02-02 00:37:25.274546 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:37:25.274565 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-02 00:37:25.274583 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-02 00:37:25.274619 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-02 00:37:25.714568 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-02 00:37:25.714780 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-02 00:37:25.714808 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-02 00:37:25.714827 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-02 00:37:25.714844 | orchestrator | 2026-02-02 00:37:25.714862 | orchestrator | 2026-02-02 00:37:25.714884 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 00:37:25.714908 | orchestrator | Monday 02 February 2026 00:37:25 +0000 (0:00:00.590) 0:00:58.749 ******* 2026-02-02 00:37:25.714925 | orchestrator | =============================================================================== 2026-02-02 00:37:25.714937 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.95s 2026-02-02 00:37:25.714947 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.95s 2026-02-02 00:37:25.714958 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.83s 2026-02-02 00:37:25.714969 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.63s 2026-02-02 00:37:25.714980 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.34s 2026-02-02 00:37:25.715094 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.33s 2026-02-02 00:37:25.715109 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.21s 2026-02-02 00:37:25.715122 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 2.10s 2026-02-02 00:37:25.715135 | orchestrator | osism.commons.network : List existing configuration files --------------- 2.01s 2026-02-02 00:37:25.715148 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.92s 2026-02-02 00:37:25.715167 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.87s 2026-02-02 00:37:25.715186 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.77s 2026-02-02 00:37:25.715203 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.77s 2026-02-02 00:37:25.715222 | orchestrator | 
osism.commons.network : List existing configuration files --------------- 1.73s 2026-02-02 00:37:25.715239 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.32s 2026-02-02 00:37:25.715258 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.31s 2026-02-02 00:37:25.715277 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.30s 2026-02-02 00:37:25.715295 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.29s 2026-02-02 00:37:25.715337 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.24s 2026-02-02 00:37:25.715360 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.19s 2026-02-02 00:37:26.069420 | orchestrator | + osism apply wireguard 2026-02-02 00:37:38.227707 | orchestrator | 2026-02-02 00:37:38 | INFO  | Prepare task for execution of wireguard. 2026-02-02 00:37:38.308431 | orchestrator | 2026-02-02 00:37:38 | INFO  | Task ce7895d6-cf82-4d84-b185-2be8fee176e2 (wireguard) was prepared for execution. 2026-02-02 00:37:38.308542 | orchestrator | 2026-02-02 00:37:38 | INFO  | It takes a moment until task ce7895d6-cf82-4d84-b185-2be8fee176e2 (wireguard) has been started and output is visible here. 
2026-02-02 00:37:59.243527 | orchestrator | 2026-02-02 00:37:59.243639 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-02-02 00:37:59.243657 | orchestrator | 2026-02-02 00:37:59.243669 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-02-02 00:37:59.243681 | orchestrator | Monday 02 February 2026 00:37:42 +0000 (0:00:00.225) 0:00:00.225 ******* 2026-02-02 00:37:59.243692 | orchestrator | ok: [testbed-manager] 2026-02-02 00:37:59.243704 | orchestrator | 2026-02-02 00:37:59.243715 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-02-02 00:37:59.243726 | orchestrator | Monday 02 February 2026 00:37:44 +0000 (0:00:01.621) 0:00:01.847 ******* 2026-02-02 00:37:59.243737 | orchestrator | changed: [testbed-manager] 2026-02-02 00:37:59.243749 | orchestrator | 2026-02-02 00:37:59.243760 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-02-02 00:37:59.243771 | orchestrator | Monday 02 February 2026 00:37:51 +0000 (0:00:07.070) 0:00:08.918 ******* 2026-02-02 00:37:59.243782 | orchestrator | changed: [testbed-manager] 2026-02-02 00:37:59.243793 | orchestrator | 2026-02-02 00:37:59.243804 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-02-02 00:37:59.243815 | orchestrator | Monday 02 February 2026 00:37:51 +0000 (0:00:00.574) 0:00:09.493 ******* 2026-02-02 00:37:59.243825 | orchestrator | changed: [testbed-manager] 2026-02-02 00:37:59.243836 | orchestrator | 2026-02-02 00:37:59.243847 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-02-02 00:37:59.243858 | orchestrator | Monday 02 February 2026 00:37:52 +0000 (0:00:00.460) 0:00:09.954 ******* 2026-02-02 00:37:59.243869 | orchestrator | ok: [testbed-manager] 2026-02-02 00:37:59.243879 | orchestrator | 2026-02-02 
00:37:59.243890 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-02-02 00:37:59.243901 | orchestrator | Monday 02 February 2026 00:37:53 +0000 (0:00:00.687) 0:00:10.642 ******* 2026-02-02 00:37:59.243912 | orchestrator | ok: [testbed-manager] 2026-02-02 00:37:59.243923 | orchestrator | 2026-02-02 00:37:59.243934 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-02-02 00:37:59.243944 | orchestrator | Monday 02 February 2026 00:37:53 +0000 (0:00:00.418) 0:00:11.061 ******* 2026-02-02 00:37:59.243955 | orchestrator | ok: [testbed-manager] 2026-02-02 00:37:59.244000 | orchestrator | 2026-02-02 00:37:59.244021 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-02-02 00:37:59.244042 | orchestrator | Monday 02 February 2026 00:37:53 +0000 (0:00:00.433) 0:00:11.494 ******* 2026-02-02 00:37:59.244061 | orchestrator | changed: [testbed-manager] 2026-02-02 00:37:59.244081 | orchestrator | 2026-02-02 00:37:59.244100 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-02-02 00:37:59.244118 | orchestrator | Monday 02 February 2026 00:37:55 +0000 (0:00:01.241) 0:00:12.736 ******* 2026-02-02 00:37:59.244136 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-02 00:37:59.244154 | orchestrator | changed: [testbed-manager] 2026-02-02 00:37:59.244172 | orchestrator | 2026-02-02 00:37:59.244192 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-02-02 00:37:59.244208 | orchestrator | Monday 02 February 2026 00:37:56 +0000 (0:00:00.958) 0:00:13.694 ******* 2026-02-02 00:37:59.244226 | orchestrator | changed: [testbed-manager] 2026-02-02 00:37:59.244243 | orchestrator | 2026-02-02 00:37:59.244261 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-02-02 
00:37:59.244315 | orchestrator | Monday 02 February 2026 00:37:57 +0000 (0:00:01.721) 0:00:15.416 ******* 2026-02-02 00:37:59.244333 | orchestrator | changed: [testbed-manager] 2026-02-02 00:37:59.244349 | orchestrator | 2026-02-02 00:37:59.244366 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:37:59.244384 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 00:37:59.244404 | orchestrator | 2026-02-02 00:37:59.244423 | orchestrator | 2026-02-02 00:37:59.244441 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 00:37:59.244459 | orchestrator | Monday 02 February 2026 00:37:58 +0000 (0:00:00.949) 0:00:16.365 ******* 2026-02-02 00:37:59.244476 | orchestrator | =============================================================================== 2026-02-02 00:37:59.244494 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.07s 2026-02-02 00:37:59.244512 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.72s 2026-02-02 00:37:59.244529 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.62s 2026-02-02 00:37:59.244548 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.24s 2026-02-02 00:37:59.244566 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s 2026-02-02 00:37:59.244584 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s 2026-02-02 00:37:59.244603 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.69s 2026-02-02 00:37:59.244621 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s 2026-02-02 00:37:59.244640 | orchestrator | osism.services.wireguard : 
Create preshared key ------------------------- 0.46s 2026-02-02 00:37:59.244660 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2026-02-02 00:37:59.244678 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s 2026-02-02 00:37:59.607716 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-02-02 00:37:59.650289 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-02-02 00:37:59.650404 | orchestrator | Dload Upload Total Spent Left Speed 2026-02-02 00:37:59.724018 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 189 0 --:--:-- --:--:-- --:--:-- 191 2026-02-02 00:37:59.737552 | orchestrator | + osism apply --environment custom workarounds 2026-02-02 00:38:01.808328 | orchestrator | 2026-02-02 00:38:01 | INFO  | Trying to run play workarounds in environment custom 2026-02-02 00:38:11.849429 | orchestrator | 2026-02-02 00:38:11 | INFO  | Prepare task for execution of workarounds. 2026-02-02 00:38:11.922361 | orchestrator | 2026-02-02 00:38:11 | INFO  | Task a0b841e6-e480-47af-b014-ed97b2636350 (workarounds) was prepared for execution. 2026-02-02 00:38:11.922446 | orchestrator | 2026-02-02 00:38:11 | INFO  | It takes a moment until task a0b841e6-e480-47af-b014-ed97b2636350 (workarounds) has been started and output is visible here. 
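The wireguard play above generates a server keypair and a preshared key, renders `wg0.conf` plus per-client configuration files, and then starts `wg-quick@wg0.service`. For orientation, a configuration of the shape such a role deploys looks roughly like this; every address, port, and key below is a placeholder, not a value from this run:

```ini
; Hypothetical sketch of a wg0.conf as deployed by an
; osism.services.wireguard-style role. All values are placeholders.
[Interface]
Address = 192.168.0.1/24          ; placeholder VPN address on the manager
ListenPort = 51820                ; default WireGuard UDP port
PrivateKey = <server-private-key> ; read back by the "Get private key - server" task

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>    ; created by the "Create preshared key" task
AllowedIPs = 192.168.0.2/32       ; one /32 per generated client configuration file
```

The "Restart wg0 service" handler at the end of the play then picks up this file via `wg-quick@wg0`.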
2026-02-02 00:38:37.655984 | orchestrator | 2026-02-02 00:38:37.656098 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 00:38:37.656115 | orchestrator | 2026-02-02 00:38:37.656127 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-02-02 00:38:37.656139 | orchestrator | Monday 02 February 2026 00:38:16 +0000 (0:00:00.134) 0:00:00.134 ******* 2026-02-02 00:38:37.656151 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-02-02 00:38:37.656163 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-02-02 00:38:37.656173 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-02-02 00:38:37.656185 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-02-02 00:38:37.656218 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-02-02 00:38:37.656229 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-02-02 00:38:37.656240 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-02-02 00:38:37.656250 | orchestrator | 2026-02-02 00:38:37.656261 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-02-02 00:38:37.656272 | orchestrator | 2026-02-02 00:38:37.656282 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-02-02 00:38:37.656293 | orchestrator | Monday 02 February 2026 00:38:17 +0000 (0:00:00.815) 0:00:00.949 ******* 2026-02-02 00:38:37.656304 | orchestrator | ok: [testbed-manager] 2026-02-02 00:38:37.656316 | orchestrator | 2026-02-02 00:38:37.656327 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-02-02 00:38:37.656338 | orchestrator | 2026-02-02 00:38:37.656349 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-02-02 00:38:37.656359 | orchestrator | Monday 02 February 2026 00:38:19 +0000 (0:00:02.402) 0:00:03.352 ******* 2026-02-02 00:38:37.656370 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:38:37.656382 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:38:37.656393 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:38:37.656403 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:38:37.656414 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:38:37.656425 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:38:37.656435 | orchestrator | 2026-02-02 00:38:37.656446 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-02-02 00:38:37.656457 | orchestrator | 2026-02-02 00:38:37.656468 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-02-02 00:38:37.656482 | orchestrator | Monday 02 February 2026 00:38:21 +0000 (0:00:01.906) 0:00:05.259 ******* 2026-02-02 00:38:37.656496 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-02 00:38:37.656509 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-02 00:38:37.656522 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-02 00:38:37.656535 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-02 00:38:37.656548 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-02 00:38:37.656560 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-02 00:38:37.656573 | orchestrator | 2026-02-02 00:38:37.656586 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-02-02 00:38:37.656599 | orchestrator | Monday 02 February 2026 00:38:23 +0000 (0:00:01.549) 0:00:06.808 ******* 2026-02-02 00:38:37.656612 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:38:37.656625 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:38:37.656638 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:38:37.656650 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:38:37.656663 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:38:37.656675 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:38:37.656687 | orchestrator | 2026-02-02 00:38:37.656700 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-02-02 00:38:37.656712 | orchestrator | Monday 02 February 2026 00:38:26 +0000 (0:00:03.863) 0:00:10.671 ******* 2026-02-02 00:38:37.656734 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:38:37.656747 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:38:37.656760 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:38:37.656772 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:38:37.656784 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:38:37.656797 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:38:37.656818 | orchestrator | 2026-02-02 00:38:37.656831 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-02-02 00:38:37.656844 | orchestrator | 2026-02-02 00:38:37.656856 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-02-02 00:38:37.656866 | orchestrator | Monday 02 February 2026 00:38:27 +0000 (0:00:00.770) 0:00:11.442 ******* 2026-02-02 00:38:37.656877 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:38:37.656888 | orchestrator | changed: [testbed-manager] 2026-02-02 00:38:37.656899 | orchestrator | changed: [testbed-node-4] 2026-02-02 
00:38:37.656909 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:38:37.656920 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:38:37.656930 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:38:37.656963 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:38:37.656977 | orchestrator | 2026-02-02 00:38:37.656988 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-02-02 00:38:37.656999 | orchestrator | Monday 02 February 2026 00:38:29 +0000 (0:00:01.546) 0:00:12.988 ******* 2026-02-02 00:38:37.657010 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:38:37.657020 | orchestrator | changed: [testbed-manager] 2026-02-02 00:38:37.657031 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:38:37.657042 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:38:37.657053 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:38:37.657064 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:38:37.657092 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:38:37.657104 | orchestrator | 2026-02-02 00:38:37.657115 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-02-02 00:38:37.657126 | orchestrator | Monday 02 February 2026 00:38:30 +0000 (0:00:01.507) 0:00:14.496 ******* 2026-02-02 00:38:37.657137 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:38:37.657147 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:38:37.657158 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:38:37.657169 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:38:37.657180 | orchestrator | ok: [testbed-manager] 2026-02-02 00:38:37.657191 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:38:37.657202 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:38:37.657212 | orchestrator | 2026-02-02 00:38:37.657223 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-02-02 00:38:37.657234 | orchestrator 
| Monday 02 February 2026 00:38:32 +0000 (0:00:01.477) 0:00:15.973 ******* 2026-02-02 00:38:37.657245 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:38:37.657256 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:38:37.657267 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:38:37.657278 | orchestrator | changed: [testbed-manager] 2026-02-02 00:38:37.657289 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:38:37.657299 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:38:37.657310 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:38:37.657321 | orchestrator | 2026-02-02 00:38:37.657332 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-02-02 00:38:37.657343 | orchestrator | Monday 02 February 2026 00:38:33 +0000 (0:00:01.787) 0:00:17.761 ******* 2026-02-02 00:38:37.657354 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:38:37.657364 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:38:37.657375 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:38:37.657386 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:38:37.657396 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:38:37.657407 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:38:37.657417 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:38:37.657428 | orchestrator | 2026-02-02 00:38:37.657439 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-02-02 00:38:37.657450 | orchestrator | 2026-02-02 00:38:37.657461 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-02-02 00:38:37.657471 | orchestrator | Monday 02 February 2026 00:38:34 +0000 (0:00:00.663) 0:00:18.425 ******* 2026-02-02 00:38:37.657482 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:38:37.657499 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:38:37.657510 | orchestrator | ok: 
[testbed-manager] 2026-02-02 00:38:37.657521 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:38:37.657532 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:38:37.657542 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:38:37.657553 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:38:37.657564 | orchestrator | 2026-02-02 00:38:37.657575 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:38:37.657588 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:38:37.657600 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 00:38:37.657612 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 00:38:37.657630 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 00:38:37.657648 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 00:38:37.657666 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 00:38:37.657684 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 00:38:37.657695 | orchestrator | 2026-02-02 00:38:37.657706 | orchestrator | 2026-02-02 00:38:37.657722 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 00:38:37.657733 | orchestrator | Monday 02 February 2026 00:38:37 +0000 (0:00:02.968) 0:00:21.393 ******* 2026-02-02 00:38:37.657744 | orchestrator | =============================================================================== 2026-02-02 00:38:37.657755 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.86s 2026-02-02 00:38:37.657766 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.97s 2026-02-02 00:38:37.657777 | orchestrator | Apply netplan configuration --------------------------------------------- 2.40s 2026-02-02 00:38:37.657788 | orchestrator | Apply netplan configuration --------------------------------------------- 1.91s 2026-02-02 00:38:37.657798 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.79s 2026-02-02 00:38:37.657809 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.55s 2026-02-02 00:38:37.657820 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.55s 2026-02-02 00:38:37.657830 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.51s 2026-02-02 00:38:37.657841 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.48s 2026-02-02 00:38:37.657852 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.82s 2026-02-02 00:38:37.657863 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s 2026-02-02 00:38:37.657881 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s 2026-02-02 00:38:38.393056 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-02-02 00:38:50.475079 | orchestrator | 2026-02-02 00:38:50 | INFO  | Prepare task for execution of reboot. 2026-02-02 00:38:50.553188 | orchestrator | 2026-02-02 00:38:50 | INFO  | Task c65c0d96-2847-4fc5-9d03-2870917530b0 (reboot) was prepared for execution. 2026-02-02 00:38:50.553231 | orchestrator | 2026-02-02 00:38:50 | INFO  | It takes a moment until task c65c0d96-2847-4fc5-9d03-2870917530b0 (reboot) has been started and output is visible here. 
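The reboot play is invoked with `-e ireallymeanit=yes`, and the first task of every play below ("Exit playbook, if user did not mean to reboot systems") is skipped only because that variable is set. A guard of this shape can be sketched as follows; the task names match the log, but the module arguments are assumptions rather than the osism playbook source:

```yaml
# Hedged sketch of a confirmation-guarded reboot play; exact module
# arguments are assumptions, only the task names come from the log above.
- name: Exit playbook, if user did not mean to reboot systems
  ansible.builtin.fail:
    msg: "Re-run with -e ireallymeanit=yes to confirm the reboot."
  when: ireallymeanit | default('no') != 'yes'

- name: Reboot system - do not wait for the reboot to complete
  ansible.builtin.shell: sleep 2 && reboot
  async: 1   # fire-and-forget, matching the "do not wait" task in the log
  poll: 0
```

Running with `async: 1 / poll: 0` explains why each node reports `changed` immediately and the "wait for the reboot to complete" task is skipped: the play returns before the node actually goes down.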
2026-02-02 00:39:01.333879 | orchestrator | 2026-02-02 00:39:01.333993 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-02 00:39:01.334004 | orchestrator | 2026-02-02 00:39:01.334011 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-02 00:39:01.334060 | orchestrator | Monday 02 February 2026 00:38:54 +0000 (0:00:00.210) 0:00:00.210 ******* 2026-02-02 00:39:01.334067 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:39:01.334075 | orchestrator | 2026-02-02 00:39:01.334081 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-02 00:39:01.334088 | orchestrator | Monday 02 February 2026 00:38:55 +0000 (0:00:00.110) 0:00:00.320 ******* 2026-02-02 00:39:01.334095 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:39:01.334101 | orchestrator | 2026-02-02 00:39:01.334108 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-02 00:39:01.334114 | orchestrator | Monday 02 February 2026 00:38:56 +0000 (0:00:01.016) 0:00:01.337 ******* 2026-02-02 00:39:01.334120 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:39:01.334126 | orchestrator | 2026-02-02 00:39:01.334132 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-02 00:39:01.334138 | orchestrator | 2026-02-02 00:39:01.334144 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-02 00:39:01.334151 | orchestrator | Monday 02 February 2026 00:38:56 +0000 (0:00:00.114) 0:00:01.451 ******* 2026-02-02 00:39:01.334157 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:39:01.334163 | orchestrator | 2026-02-02 00:39:01.334169 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-02 00:39:01.334174 | orchestrator | Monday 02 February 
2026 00:38:56 +0000 (0:00:00.112) 0:00:01.564 ******* 2026-02-02 00:39:01.334181 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:39:01.334187 | orchestrator | 2026-02-02 00:39:01.334194 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-02 00:39:01.334200 | orchestrator | Monday 02 February 2026 00:38:57 +0000 (0:00:00.721) 0:00:02.285 ******* 2026-02-02 00:39:01.334206 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:39:01.334212 | orchestrator | 2026-02-02 00:39:01.334218 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-02 00:39:01.334224 | orchestrator | 2026-02-02 00:39:01.334230 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-02 00:39:01.334236 | orchestrator | Monday 02 February 2026 00:38:57 +0000 (0:00:00.112) 0:00:02.398 ******* 2026-02-02 00:39:01.334242 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:39:01.334248 | orchestrator | 2026-02-02 00:39:01.334254 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-02 00:39:01.334260 | orchestrator | Monday 02 February 2026 00:38:57 +0000 (0:00:00.233) 0:00:02.631 ******* 2026-02-02 00:39:01.334266 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:39:01.334273 | orchestrator | 2026-02-02 00:39:01.334279 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-02 00:39:01.334285 | orchestrator | Monday 02 February 2026 00:38:58 +0000 (0:00:00.704) 0:00:03.336 ******* 2026-02-02 00:39:01.334291 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:39:01.334297 | orchestrator | 2026-02-02 00:39:01.334303 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-02 00:39:01.334309 | orchestrator | 2026-02-02 00:39:01.334315 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-02-02 00:39:01.334321 | orchestrator | Monday 02 February 2026 00:38:58 +0000 (0:00:00.116) 0:00:03.452 ******* 2026-02-02 00:39:01.334327 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:39:01.334333 | orchestrator | 2026-02-02 00:39:01.334339 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-02 00:39:01.334359 | orchestrator | Monday 02 February 2026 00:38:58 +0000 (0:00:00.113) 0:00:03.566 ******* 2026-02-02 00:39:01.334366 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:39:01.334393 | orchestrator | 2026-02-02 00:39:01.334400 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-02 00:39:01.334406 | orchestrator | Monday 02 February 2026 00:38:59 +0000 (0:00:00.674) 0:00:04.240 ******* 2026-02-02 00:39:01.334412 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:39:01.334419 | orchestrator | 2026-02-02 00:39:01.334425 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-02 00:39:01.334431 | orchestrator | 2026-02-02 00:39:01.334437 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-02 00:39:01.334443 | orchestrator | Monday 02 February 2026 00:38:59 +0000 (0:00:00.122) 0:00:04.363 ******* 2026-02-02 00:39:01.334449 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:39:01.334456 | orchestrator | 2026-02-02 00:39:01.334462 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-02 00:39:01.334468 | orchestrator | Monday 02 February 2026 00:38:59 +0000 (0:00:00.108) 0:00:04.471 ******* 2026-02-02 00:39:01.334474 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:39:01.334481 | orchestrator | 2026-02-02 00:39:01.334487 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-02-02 00:39:01.334493 | orchestrator | Monday 02 February 2026 00:38:59 +0000 (0:00:00.661) 0:00:05.132 ******* 2026-02-02 00:39:01.334499 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:39:01.334506 | orchestrator | 2026-02-02 00:39:01.334512 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-02 00:39:01.334518 | orchestrator | 2026-02-02 00:39:01.334524 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-02 00:39:01.334530 | orchestrator | Monday 02 February 2026 00:39:00 +0000 (0:00:00.113) 0:00:05.246 ******* 2026-02-02 00:39:01.334537 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:39:01.334543 | orchestrator | 2026-02-02 00:39:01.334550 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-02 00:39:01.334556 | orchestrator | Monday 02 February 2026 00:39:00 +0000 (0:00:00.106) 0:00:05.353 ******* 2026-02-02 00:39:01.334562 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:39:01.334568 | orchestrator | 2026-02-02 00:39:01.334574 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-02 00:39:01.334581 | orchestrator | Monday 02 February 2026 00:39:00 +0000 (0:00:00.744) 0:00:06.097 ******* 2026-02-02 00:39:01.334600 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:39:01.334606 | orchestrator | 2026-02-02 00:39:01.334612 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:39:01.334620 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 00:39:01.334628 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 00:39:01.334634 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-02-02 00:39:01.334641 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 00:39:01.334647 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 00:39:01.334653 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 00:39:01.334659 | orchestrator | 2026-02-02 00:39:01.334666 | orchestrator | 2026-02-02 00:39:01.334673 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 00:39:01.334679 | orchestrator | Monday 02 February 2026 00:39:00 +0000 (0:00:00.040) 0:00:06.138 ******* 2026-02-02 00:39:01.334685 | orchestrator | =============================================================================== 2026-02-02 00:39:01.334696 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.52s 2026-02-02 00:39:01.334702 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.79s 2026-02-02 00:39:01.334708 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s 2026-02-02 00:39:01.694726 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-02 00:39:13.812206 | orchestrator | 2026-02-02 00:39:13 | INFO  | Prepare task for execution of wait-for-connection. 2026-02-02 00:39:13.888899 | orchestrator | 2026-02-02 00:39:13 | INFO  | Task 9cb13c23-eaa0-41eb-932e-a3d6794641bc (wait-for-connection) was prepared for execution. 2026-02-02 00:39:13.888995 | orchestrator | 2026-02-02 00:39:13 | INFO  | It takes a moment until task 9cb13c23-eaa0-41eb-932e-a3d6794641bc (wait-for-connection) has been started and output is visible here. 
2026-02-02 00:39:30.220855 | orchestrator | 2026-02-02 00:39:30.221013 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-02 00:39:30.221035 | orchestrator | 2026-02-02 00:39:30.221048 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-02 00:39:30.221060 | orchestrator | Monday 02 February 2026 00:39:18 +0000 (0:00:00.240) 0:00:00.240 ******* 2026-02-02 00:39:30.221072 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:39:30.221084 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:39:30.221095 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:39:30.221105 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:39:30.221136 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:39:30.221155 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:39:30.221173 | orchestrator | 2026-02-02 00:39:30.221191 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:39:30.221209 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 00:39:30.221230 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 00:39:30.221248 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 00:39:30.221265 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 00:39:30.221276 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 00:39:30.221287 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 00:39:30.221298 | orchestrator | 2026-02-02 00:39:30.221308 | orchestrator | 2026-02-02 00:39:30.221319 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-02 00:39:30.221330 | orchestrator | Monday 02 February 2026 00:39:29 +0000 (0:00:11.575) 0:00:11.816 ******* 2026-02-02 00:39:30.221341 | orchestrator | =============================================================================== 2026-02-02 00:39:30.221352 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.58s 2026-02-02 00:39:30.608088 | orchestrator | + osism apply hddtemp 2026-02-02 00:39:42.862247 | orchestrator | 2026-02-02 00:39:42 | INFO  | Prepare task for execution of hddtemp. 2026-02-02 00:39:42.934114 | orchestrator | 2026-02-02 00:39:42 | INFO  | Task f34c6da4-697d-4249-985d-242f41c9f5b5 (hddtemp) was prepared for execution. 2026-02-02 00:39:42.934203 | orchestrator | 2026-02-02 00:39:42 | INFO  | It takes a moment until task f34c6da4-697d-4249-985d-242f41c9f5b5 (hddtemp) has been started and output is visible here. 2026-02-02 00:40:12.553115 | orchestrator | 2026-02-02 00:40:12.553232 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-02 00:40:12.553276 | orchestrator | 2026-02-02 00:40:12.553290 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-02 00:40:12.553302 | orchestrator | Monday 02 February 2026 00:39:47 +0000 (0:00:00.276) 0:00:00.276 ******* 2026-02-02 00:40:12.553314 | orchestrator | ok: [testbed-manager] 2026-02-02 00:40:12.553327 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:40:12.553340 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:40:12.553352 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:40:12.553364 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:40:12.553376 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:40:12.553388 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:40:12.553401 | orchestrator | 2026-02-02 00:40:12.553413 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-02-02 00:40:12.553426 | orchestrator | Monday 02 February 2026 00:39:48 +0000 (0:00:00.709) 0:00:00.985 ******* 2026-02-02 00:40:12.553441 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:40:12.553455 | orchestrator | 2026-02-02 00:40:12.553468 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-02 00:40:12.553480 | orchestrator | Monday 02 February 2026 00:39:49 +0000 (0:00:01.293) 0:00:02.279 ******* 2026-02-02 00:40:12.553493 | orchestrator | ok: [testbed-manager] 2026-02-02 00:40:12.553506 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:40:12.553518 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:40:12.553531 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:40:12.553543 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:40:12.553556 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:40:12.553568 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:40:12.553580 | orchestrator | 2026-02-02 00:40:12.553593 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-02 00:40:12.553606 | orchestrator | Monday 02 February 2026 00:39:51 +0000 (0:00:02.191) 0:00:04.471 ******* 2026-02-02 00:40:12.553620 | orchestrator | changed: [testbed-manager] 2026-02-02 00:40:12.553633 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:40:12.553645 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:40:12.553657 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:40:12.553668 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:40:12.553680 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:40:12.553690 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:40:12.553702 | 
orchestrator | 2026-02-02 00:40:12.553714 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-02-02 00:40:12.553726 | orchestrator | Monday 02 February 2026 00:39:52 +0000 (0:00:01.233) 0:00:05.705 ******* 2026-02-02 00:40:12.553739 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:40:12.553751 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:40:12.553764 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:40:12.553776 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:40:12.553788 | orchestrator | ok: [testbed-manager] 2026-02-02 00:40:12.553801 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:40:12.553813 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:40:12.553827 | orchestrator | 2026-02-02 00:40:12.553840 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-02 00:40:12.553854 | orchestrator | Monday 02 February 2026 00:39:54 +0000 (0:00:01.202) 0:00:06.907 ******* 2026-02-02 00:40:12.553869 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:40:12.553883 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:40:12.553940 | orchestrator | changed: [testbed-manager] 2026-02-02 00:40:12.553957 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:40:12.553972 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:40:12.553988 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:40:12.554004 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:40:12.554073 | orchestrator | 2026-02-02 00:40:12.554087 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-02 00:40:12.554113 | orchestrator | Monday 02 February 2026 00:39:54 +0000 (0:00:00.892) 0:00:07.800 ******* 2026-02-02 00:40:12.554126 | orchestrator | changed: [testbed-manager] 2026-02-02 00:40:12.554139 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:40:12.554151 | orchestrator | changed: [testbed-node-1] 
2026-02-02 00:40:12.554162 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:40:12.554173 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:40:12.554184 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:40:12.554196 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:40:12.554209 | orchestrator | 2026-02-02 00:40:12.554221 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-02 00:40:12.554233 | orchestrator | Monday 02 February 2026 00:40:08 +0000 (0:00:13.994) 0:00:21.795 ******* 2026-02-02 00:40:12.554245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:40:12.554259 | orchestrator | 2026-02-02 00:40:12.554272 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-02 00:40:12.554284 | orchestrator | Monday 02 February 2026 00:40:10 +0000 (0:00:01.266) 0:00:23.061 ******* 2026-02-02 00:40:12.554297 | orchestrator | changed: [testbed-manager] 2026-02-02 00:40:12.554307 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:40:12.554318 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:40:12.554330 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:40:12.554342 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:40:12.554354 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:40:12.554366 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:40:12.554378 | orchestrator | 2026-02-02 00:40:12.554391 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:40:12.554404 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 00:40:12.554440 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:40:12.554454 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:40:12.554467 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:40:12.554480 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:40:12.554492 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:40:12.554503 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:40:12.554516 | orchestrator | 2026-02-02 00:40:12.554528 | orchestrator | 2026-02-02 00:40:12.554542 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 00:40:12.554555 | orchestrator | Monday 02 February 2026 00:40:12 +0000 (0:00:01.934) 0:00:24.996 ******* 2026-02-02 00:40:12.554569 | orchestrator | =============================================================================== 2026-02-02 00:40:12.554580 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.99s 2026-02-02 00:40:12.554592 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.19s 2026-02-02 00:40:12.554605 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.93s 2026-02-02 00:40:12.554618 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.29s 2026-02-02 00:40:12.554641 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.27s 2026-02-02 00:40:12.554654 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.23s 2026-02-02 00:40:12.554666 | orchestrator | osism.services.hddtemp : Check 
if drivetemp module is available --------- 1.20s 2026-02-02 00:40:12.554678 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.89s 2026-02-02 00:40:12.554690 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.71s 2026-02-02 00:40:12.879848 | orchestrator | ++ semver latest 7.1.1 2026-02-02 00:40:12.938834 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-02 00:40:12.939007 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-02 00:40:12.939025 | orchestrator | + sudo systemctl restart manager.service 2026-02-02 00:40:26.036507 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-02 00:40:26.036611 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-02 00:40:26.036624 | orchestrator | + local max_attempts=60 2026-02-02 00:40:26.036634 | orchestrator | + local name=ceph-ansible 2026-02-02 00:40:26.036643 | orchestrator | + local attempt_num=1 2026-02-02 00:40:26.036651 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:40:26.066257 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 00:40:26.066370 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 00:40:26.066388 | orchestrator | + sleep 5 2026-02-02 00:40:31.070551 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:40:31.107858 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 00:40:31.108046 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 00:40:31.108072 | orchestrator | + sleep 5 2026-02-02 00:40:36.111199 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:40:36.141887 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 00:40:36.141994 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 00:40:36.142009 | orchestrator | + sleep 5 2026-02-02 00:40:41.145895 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:40:41.181718 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 00:40:41.181805 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 00:40:41.181820 | orchestrator | + sleep 5 2026-02-02 00:40:46.186272 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:40:46.221131 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 00:40:46.221220 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 00:40:46.221233 | orchestrator | + sleep 5 2026-02-02 00:40:51.225444 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:40:51.273507 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 00:40:51.273600 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 00:40:51.273609 | orchestrator | + sleep 5 2026-02-02 00:40:56.278344 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:40:56.312962 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 00:40:56.313042 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 00:40:56.313051 | orchestrator | + sleep 5 2026-02-02 00:41:01.321476 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:41:01.363617 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-02 00:41:01.363696 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 00:41:01.363710 | orchestrator | + sleep 5 2026-02-02 00:41:06.385695 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:41:06.424085 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-02 00:41:06.424171 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 00:41:06.424186 | orchestrator | + sleep 5 2026-02-02 00:41:11.427592 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:41:11.459448 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-02 00:41:11.459502 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 00:41:11.459508 | orchestrator | + sleep 5 2026-02-02 00:41:16.464392 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:41:16.493699 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-02 00:41:16.493771 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 00:41:16.493799 | orchestrator | + sleep 5 2026-02-02 00:41:21.498487 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:41:21.538687 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-02 00:41:21.538782 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 00:41:21.538797 | orchestrator | + sleep 5 2026-02-02 00:41:26.543103 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:41:26.581298 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-02 00:41:26.581404 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 00:41:26.581420 | orchestrator | + sleep 5 2026-02-02 00:41:31.587312 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 00:41:31.625242 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-02 00:41:31.625367 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-02 00:41:31.625385 | orchestrator | + local max_attempts=60 2026-02-02 00:41:31.625397 | orchestrator | + local name=kolla-ansible 2026-02-02 00:41:31.625408 | orchestrator | + local attempt_num=1 2026-02-02 00:41:31.625419 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-02 00:41:31.655764 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-02 00:41:31.655859 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-02-02 00:41:31.655881 | orchestrator | + local max_attempts=60 2026-02-02 00:41:31.655901 | orchestrator | + local name=osism-ansible 2026-02-02 00:41:31.655950 | orchestrator | + local attempt_num=1 2026-02-02 00:41:31.656134 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-02 00:41:31.699200 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-02 00:41:31.699317 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-02 00:41:31.699345 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-02 00:41:31.859108 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-02 00:41:32.002571 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-02 00:41:32.159107 | orchestrator | ARA in osism-ansible already disabled. 2026-02-02 00:41:32.320451 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-02 00:41:32.321219 | orchestrator | + osism apply gather-facts 2026-02-02 00:41:44.449116 | orchestrator | 2026-02-02 00:41:44 | INFO  | Prepare task for execution of gather-facts. 2026-02-02 00:41:44.521818 | orchestrator | 2026-02-02 00:41:44 | INFO  | Task 004a73f6-4171-4361-9255-e0ce2d9cbe46 (gather-facts) was prepared for execution. 2026-02-02 00:41:44.521910 | orchestrator | 2026-02-02 00:41:44 | INFO  | It takes a moment until task 004a73f6-4171-4361-9255-e0ce2d9cbe46 (gather-facts) has been started and output is visible here. 
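The repeated `docker inspect` / `sleep 5` xtrace lines above come from the job's `wait_for_container_healthy` helper. An approximate reconstruction, inferred only from the trace (the real script in the testbed repository may differ; `DOCKER` and `SLEEP_SECONDS` are overridable here for illustration, whereas the job hardcodes `/usr/bin/docker` and a 5-second interval):

```shell
# Poll a container's health status until it reports "healthy",
# giving up after max_attempts polls. Reconstructed from the xtrace;
# not the verbatim helper used by the job.
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    docker_cmd="${DOCKER:-/usr/bin/docker}"
    until [ "$("$docker_cmd" inspect -f '{{.State.Health.Status}}' "$name")" = healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name never became healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep "${SLEEP_SECONDS:-5}"
    done
    return 0
}
```

This matches the observed behavior: `ceph-ansible` cycles through `unhealthy` and `starting` for roughly a minute before the `healthy` comparison finally succeeds, while `kolla-ansible` and `osism-ansible` pass on the first poll.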
2026-02-02 00:41:58.361176 | orchestrator | 2026-02-02 00:41:58.361235 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-02 00:41:58.361245 | orchestrator | 2026-02-02 00:41:58.361253 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-02 00:41:58.361261 | orchestrator | Monday 02 February 2026 00:41:48 +0000 (0:00:00.221) 0:00:00.221 ******* 2026-02-02 00:41:58.361268 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:41:58.361276 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:41:58.361282 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:41:58.361286 | orchestrator | ok: [testbed-manager] 2026-02-02 00:41:58.361290 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:41:58.361294 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:41:58.361298 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:41:58.361302 | orchestrator | 2026-02-02 00:41:58.361306 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-02 00:41:58.361310 | orchestrator | 2026-02-02 00:41:58.361314 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-02 00:41:58.361318 | orchestrator | Monday 02 February 2026 00:41:57 +0000 (0:00:08.609) 0:00:08.830 ******* 2026-02-02 00:41:58.361322 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:41:58.361327 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:41:58.361331 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:41:58.361335 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:41:58.361339 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:41:58.361361 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:41:58.361369 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:41:58.361375 | orchestrator | 2026-02-02 00:41:58.361381 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-02 00:41:58.361388 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:41:58.361396 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:41:58.361403 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:41:58.361410 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:41:58.361417 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:41:58.361423 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:41:58.361431 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:41:58.361435 | orchestrator | 2026-02-02 00:41:58.361439 | orchestrator | 2026-02-02 00:41:58.361443 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 00:41:58.361447 | orchestrator | Monday 02 February 2026 00:41:58 +0000 (0:00:00.493) 0:00:09.324 ******* 2026-02-02 00:41:58.361451 | orchestrator | =============================================================================== 2026-02-02 00:41:58.361455 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.61s 2026-02-02 00:41:58.361459 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2026-02-02 00:41:58.591673 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-02 00:41:58.608920 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-02 
00:41:58.620915 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-02-02 00:41:58.633142 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-02-02 00:41:58.645723 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-02-02 00:41:58.658905 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-02-02 00:41:58.671258 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-02-02 00:41:58.686365 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-02-02 00:41:58.701271 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-02-02 00:41:58.717206 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-02-02 00:41:58.729447 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-02-02 00:41:58.747506 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-02-02 00:41:58.769271 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-02-02 00:41:58.783025 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-02-02 00:41:58.794504 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-02-02 00:41:58.810801 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-02-02 00:41:58.828589 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-02-02 00:41:58.847668 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-02-02 00:41:58.865428 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-02-02 00:41:58.876416 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-02-02 00:41:58.893915 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-02-02 00:41:58.907070 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-02-02 00:41:58.929630 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-02-02 00:41:58.943887 | orchestrator | + [[ false == \t\r\u\e ]]
2026-02-02 00:41:59.424320 | orchestrator | ok: Runtime: 0:24:59.878122
2026-02-02 00:41:59.524435 |
2026-02-02 00:41:59.524576 | TASK [Deploy services]
2026-02-02 00:42:00.058607 | orchestrator | skipping: Conditional result was False
2026-02-02 00:42:00.076496 |
2026-02-02 00:42:00.076670 | TASK [Deploy in a nutshell]
2026-02-02 00:42:00.763760 | orchestrator | + set -e
2026-02-02 00:42:00.763907 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-02 00:42:00.763921 | orchestrator | ++ export INTERACTIVE=false
2026-02-02 00:42:00.763949 | orchestrator | ++ INTERACTIVE=false
2026-02-02 00:42:00.763957 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-02 00:42:00.763964 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-02 00:42:00.763972 | orchestrator | + source /opt/manager-vars.sh
2026-02-02 00:42:00.763998 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-02 00:42:00.764014 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-02 00:42:00.764021 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-02 00:42:00.765181 | orchestrator |
2026-02-02 00:42:00.765214 | orchestrator | # PULL IMAGES
2026-02-02 00:42:00.765222 | orchestrator |
2026-02-02 00:42:00.765228 | orchestrator | ++ CEPH_VERSION=reef
2026-02-02 00:42:00.765238 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-02 00:42:00.765245 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-02 00:42:00.765257 | orchestrator | ++ export MANAGER_VERSION=latest
2026-02-02 00:42:00.765263 | orchestrator | ++ MANAGER_VERSION=latest
2026-02-02 00:42:00.765271 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-02-02 00:42:00.765277 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-02-02 00:42:00.765283 | orchestrator | ++ export ARA=false
2026-02-02 00:42:00.765288 | orchestrator | ++ ARA=false
2026-02-02 00:42:00.765295 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-02 00:42:00.765300 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-02 00:42:00.765306 | orchestrator | ++ export TEMPEST=true
2026-02-02 00:42:00.765311 | orchestrator | ++ TEMPEST=true
2026-02-02 00:42:00.765317 | orchestrator | ++ export IS_ZUUL=true
2026-02-02 00:42:00.765322 | orchestrator | ++ IS_ZUUL=true
2026-02-02 00:42:00.765327 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.61
2026-02-02 00:42:00.765333 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.61
2026-02-02 00:42:00.765339 | orchestrator | ++ export EXTERNAL_API=false
2026-02-02 00:42:00.765344 | orchestrator | ++ EXTERNAL_API=false
2026-02-02 00:42:00.765349 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-02 00:42:00.765355 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-02 00:42:00.765361 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-02 00:42:00.765366 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-02 00:42:00.765372 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-02 00:42:00.765386 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-02 00:42:00.765395 | orchestrator | + echo
2026-02-02 00:42:00.765404 | orchestrator | + echo '# PULL IMAGES'
2026-02-02 00:42:00.765412 | orchestrator | + echo
2026-02-02 00:42:00.765571 | orchestrator | ++ semver latest 7.0.0
2026-02-02 00:42:00.829598 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-02 00:42:00.829708 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-02-02 00:42:00.829726 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-02 00:42:02.932865 | orchestrator | 2026-02-02 00:42:02 | INFO  | Trying to run play pull-images in environment custom
2026-02-02 00:42:12.971553 | orchestrator | 2026-02-02 00:42:12 | INFO  | Prepare task for execution of pull-images.
2026-02-02 00:42:13.056269 | orchestrator | 2026-02-02 00:42:13 | INFO  | Task 5933e4dd-70be-466e-8e59-aa03d51e809c (pull-images) was prepared for execution.
2026-02-02 00:42:13.056375 | orchestrator | 2026-02-02 00:42:13 | INFO  | Task 5933e4dd-70be-466e-8e59-aa03d51e809c is running in background. No more output. Check ARA for logs.
2026-02-02 00:42:15.628549 | orchestrator | 2026-02-02 00:42:15 | INFO  | Trying to run play wipe-partitions in environment custom
2026-02-02 00:42:25.672898 | orchestrator | 2026-02-02 00:42:25 | INFO  | Prepare task for execution of wipe-partitions.
2026-02-02 00:42:25.744064 | orchestrator | 2026-02-02 00:42:25 | INFO  | Task 3e1b9f75-ddfc-4f62-9dd9-55f68ac45a97 (wipe-partitions) was prepared for execution.
2026-02-02 00:42:25.744176 | orchestrator | 2026-02-02 00:42:25 | INFO  | It takes a moment until task 3e1b9f75-ddfc-4f62-9dd9-55f68ac45a97 (wipe-partitions) has been started and output is visible here.
2026-02-02 00:42:38.809466 | orchestrator |
2026-02-02 00:42:38.809579 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-02-02 00:42:38.809597 | orchestrator |
2026-02-02 00:42:38.809609 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-02-02 00:42:38.809625 | orchestrator | Monday 02 February 2026 00:42:30 +0000 (0:00:00.135) 0:00:00.135 *******
2026-02-02 00:42:38.809663 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:42:38.809676 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:42:38.809686 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:42:38.809698 | orchestrator |
2026-02-02 00:42:38.809709 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-02-02 00:42:38.809720 | orchestrator | Monday 02 February 2026 00:42:30 +0000 (0:00:00.590) 0:00:00.726 *******
2026-02-02 00:42:38.809740 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:42:38.809751 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:42:38.809762 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:42:38.809772 | orchestrator |
2026-02-02 00:42:38.809860 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-02-02 00:42:38.809869 | orchestrator | Monday 02 February 2026 00:42:31 +0000 (0:00:00.386) 0:00:01.113 *******
2026-02-02 00:42:38.809875 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:42:38.809883 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:42:38.809893 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:42:38.809903 | orchestrator |
2026-02-02 00:42:38.809993 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-02-02 00:42:38.810090 | orchestrator | Monday 02 February 2026 00:42:31 +0000 (0:00:00.598) 0:00:01.711 *******
2026-02-02 00:42:38.810105 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:42:38.810116 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:42:38.810126 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:42:38.810137 | orchestrator |
2026-02-02 00:42:38.810149 | orchestrator | TASK [Check device availability] ***********************************************
2026-02-02 00:42:38.810160 | orchestrator | Monday 02 February 2026 00:42:32 +0000 (0:00:00.256) 0:00:01.967 *******
2026-02-02 00:42:38.810170 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-02-02 00:42:38.810185 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-02-02 00:42:38.810196 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-02-02 00:42:38.810209 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-02-02 00:42:38.810220 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-02-02 00:42:38.810231 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-02-02 00:42:38.810242 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-02-02 00:42:38.810253 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-02-02 00:42:38.810265 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-02-02 00:42:38.810277 | orchestrator |
2026-02-02 00:42:38.810289 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-02-02 00:42:38.810301 | orchestrator | Monday 02 February 2026 00:42:33 +0000 (0:00:01.240) 0:00:03.208 *******
2026-02-02 00:42:38.810312 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-02-02 00:42:38.810324 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-02-02 00:42:38.810335 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-02-02 00:42:38.810347 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-02-02 00:42:38.810356 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-02-02 00:42:38.810362 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-02-02 00:42:38.810368 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-02-02 00:42:38.810375 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-02-02 00:42:38.810381 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-02-02 00:42:38.810387 | orchestrator |
2026-02-02 00:42:38.810401 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-02-02 00:42:38.810408 | orchestrator | Monday 02 February 2026 00:42:34 +0000 (0:00:01.581) 0:00:04.789 *******
2026-02-02 00:42:38.810414 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-02-02 00:42:38.810420 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-02-02 00:42:38.810426 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-02-02 00:42:38.810432 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-02-02 00:42:38.810449 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-02-02 00:42:38.810455 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-02-02 00:42:38.810461 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-02-02 00:42:38.810468 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-02-02 00:42:38.810474 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-02-02 00:42:38.810480 | orchestrator |
2026-02-02 00:42:38.810486 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-02-02 00:42:38.810493 | orchestrator | Monday 02 February 2026 00:42:37 +0000 (0:00:02.220) 0:00:07.010 *******
2026-02-02 00:42:38.810499 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:42:38.810505 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:42:38.810511 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:42:38.810518 | orchestrator |
2026-02-02 00:42:38.810524 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-02-02 00:42:38.810530 | orchestrator | Monday 02 February 2026 00:42:37 +0000 (0:00:00.692) 0:00:07.703 *******
2026-02-02 00:42:38.810537 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:42:38.810543 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:42:38.810549 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:42:38.810556 | orchestrator |
2026-02-02 00:42:38.810563 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:42:38.810570 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:42:38.810579 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:42:38.810604 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:42:38.810610 | orchestrator |
2026-02-02 00:42:38.810617 | orchestrator |
2026-02-02 00:42:38.810623 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:42:38.810630 | orchestrator | Monday 02 February 2026 00:42:38 +0000 (0:00:00.631) 0:00:08.335 *******
2026-02-02 00:42:38.810636 | orchestrator | ===============================================================================
2026-02-02 00:42:38.810642 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.22s
2026-02-02 00:42:38.810648 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.58s
2026-02-02 00:42:38.810655 | orchestrator | Check device availability ----------------------------------------------- 1.24s
2026-02-02 00:42:38.810661 | orchestrator | Reload udev rules ------------------------------------------------------- 0.69s
2026-02-02 00:42:38.810667 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s
2026-02-02 00:42:38.810673 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s
2026-02-02 00:42:38.810680 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s
2026-02-02 00:42:38.810686 | orchestrator | Remove all rook related logical devices --------------------------------- 0.39s
2026-02-02 00:42:38.810692 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s
2026-02-02 00:42:51.301498 | orchestrator | 2026-02-02 00:42:51 | INFO  | Prepare task for execution of facts.
2026-02-02 00:42:51.382980 | orchestrator | 2026-02-02 00:42:51 | INFO  | Task 342c9540-f135-400c-9bf6-1ad637005153 (facts) was prepared for execution.
2026-02-02 00:42:51.383080 | orchestrator | 2026-02-02 00:42:51 | INFO  | It takes a moment until task 342c9540-f135-400c-9bf6-1ad637005153 (facts) has been started and output is visible here.
2026-02-02 00:43:04.849377 | orchestrator |
2026-02-02 00:43:04.849468 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-02 00:43:04.849478 | orchestrator |
2026-02-02 00:43:04.849502 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-02 00:43:04.849510 | orchestrator | Monday 02 February 2026 00:42:55 +0000 (0:00:00.269) 0:00:00.269 *******
2026-02-02 00:43:04.849516 | orchestrator | ok: [testbed-manager]
2026-02-02 00:43:04.849524 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:43:04.849530 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:43:04.849537 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:43:04.849544 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:43:04.849555 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:43:04.849565 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:43:04.849575 | orchestrator |
2026-02-02 00:43:04.849584 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-02 00:43:04.849594 | orchestrator | Monday 02 February 2026 00:42:56 +0000 (0:00:01.141) 0:00:01.410 *******
2026-02-02 00:43:04.849605 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:43:04.849617 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:43:04.849627 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:43:04.849638 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:43:04.849648 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:04.849658 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:43:04.849670 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:04.849680 | orchestrator |
2026-02-02 00:43:04.849691 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-02 00:43:04.849719 | orchestrator |
2026-02-02 00:43:04.849730 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-02 00:43:04.849743 | orchestrator | Monday 02 February 2026 00:42:58 +0000 (0:00:01.203) 0:00:02.614 *******
2026-02-02 00:43:04.849754 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:43:04.849766 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:43:04.849777 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:43:04.849789 | orchestrator | ok: [testbed-manager]
2026-02-02 00:43:04.849796 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:43:04.849802 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:43:04.849808 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:43:04.849815 | orchestrator |
2026-02-02 00:43:04.849821 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-02 00:43:04.849827 | orchestrator |
2026-02-02 00:43:04.849834 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-02 00:43:04.849840 | orchestrator | Monday 02 February 2026 00:43:03 +0000 (0:00:05.658) 0:00:08.272 *******
2026-02-02 00:43:04.849846 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:43:04.849853 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:43:04.849859 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:43:04.849865 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:43:04.849871 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:04.849877 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:43:04.849883 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:04.849890 | orchestrator |
2026-02-02 00:43:04.849896 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:43:04.849903 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:43:04.849911 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:43:04.849917 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:43:04.849923 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:43:04.849970 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:43:04.849986 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:43:04.849995 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:43:04.850001 | orchestrator |
2026-02-02 00:43:04.850008 | orchestrator |
2026-02-02 00:43:04.850064 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:43:04.850072 | orchestrator | Monday 02 February 2026 00:43:04 +0000 (0:00:00.569) 0:00:08.842 *******
2026-02-02 00:43:04.850079 | orchestrator | ===============================================================================
2026-02-02 00:43:04.850086 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.66s
2026-02-02 00:43:04.850093 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s
2026-02-02 00:43:04.850100 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s
2026-02-02 00:43:04.850107 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s
2026-02-02 00:43:07.318893 | orchestrator | 2026-02-02 00:43:07 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-02-02 00:43:07.385326 | orchestrator | 2026-02-02 00:43:07 | INFO  | Task 64d65cf5-4c43-412c-b598-f3bf28e47693 (ceph-configure-lvm-volumes) was prepared for execution.
2026-02-02 00:43:07.385392 | orchestrator | 2026-02-02 00:43:07 | INFO  | It takes a moment until task 64d65cf5-4c43-412c-b598-f3bf28e47693 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-02-02 00:43:19.614553 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-02 00:43:19.614643 | orchestrator | 2.16.14
2026-02-02 00:43:19.614653 | orchestrator |
2026-02-02 00:43:19.614661 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-02 00:43:19.614668 | orchestrator |
2026-02-02 00:43:19.614675 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-02 00:43:19.614683 | orchestrator | Monday 02 February 2026 00:43:11 +0000 (0:00:00.327) 0:00:00.327 *******
2026-02-02 00:43:19.614690 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-02 00:43:19.614697 | orchestrator |
2026-02-02 00:43:19.614703 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-02 00:43:19.614710 | orchestrator | Monday 02 February 2026 00:43:12 +0000 (0:00:00.238) 0:00:00.566 *******
2026-02-02 00:43:19.614716 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:43:19.614723 | orchestrator |
2026-02-02 00:43:19.614730 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.614737 | orchestrator | Monday 02 February 2026 00:43:12 +0000 (0:00:00.234) 0:00:00.800 *******
2026-02-02 00:43:19.614750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-02 00:43:19.614757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-02 00:43:19.614764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-02 00:43:19.614770 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-02 00:43:19.614776 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-02 00:43:19.614783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-02 00:43:19.614789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-02 00:43:19.614795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-02 00:43:19.614802 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-02 00:43:19.614808 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-02 00:43:19.614832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-02 00:43:19.614839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-02 00:43:19.614845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-02 00:43:19.614852 | orchestrator |
2026-02-02 00:43:19.614858 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.614864 | orchestrator | Monday 02 February 2026 00:43:12 +0000 (0:00:00.497) 0:00:01.298 *******
2026-02-02 00:43:19.614871 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.614877 | orchestrator |
2026-02-02 00:43:19.614884 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.614890 | orchestrator | Monday 02 February 2026 00:43:13 +0000 (0:00:00.220) 0:00:01.518 *******
2026-02-02 00:43:19.614896 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.614903 | orchestrator |
2026-02-02 00:43:19.614909 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.614919 | orchestrator | Monday 02 February 2026 00:43:13 +0000 (0:00:00.236) 0:00:01.754 *******
2026-02-02 00:43:19.614926 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.614961 | orchestrator |
2026-02-02 00:43:19.614968 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.614974 | orchestrator | Monday 02 February 2026 00:43:13 +0000 (0:00:00.226) 0:00:01.981 *******
2026-02-02 00:43:19.614981 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.614987 | orchestrator |
2026-02-02 00:43:19.614994 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.615000 | orchestrator | Monday 02 February 2026 00:43:13 +0000 (0:00:00.206) 0:00:02.187 *******
2026-02-02 00:43:19.615007 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.615013 | orchestrator |
2026-02-02 00:43:19.615019 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.615026 | orchestrator | Monday 02 February 2026 00:43:14 +0000 (0:00:00.188) 0:00:02.376 *******
2026-02-02 00:43:19.615032 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.615039 | orchestrator |
2026-02-02 00:43:19.615045 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.615051 | orchestrator | Monday 02 February 2026 00:43:14 +0000 (0:00:00.202) 0:00:02.578 *******
2026-02-02 00:43:19.615058 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.615064 | orchestrator |
2026-02-02 00:43:19.615071 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.615077 | orchestrator | Monday 02 February 2026 00:43:14 +0000 (0:00:00.183) 0:00:02.761 *******
2026-02-02 00:43:19.615083 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.615090 | orchestrator |
2026-02-02 00:43:19.615097 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.615105 | orchestrator | Monday 02 February 2026 00:43:14 +0000 (0:00:00.215) 0:00:02.977 *******
2026-02-02 00:43:19.615113 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad)
2026-02-02 00:43:19.615121 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad)
2026-02-02 00:43:19.615128 | orchestrator |
2026-02-02 00:43:19.615135 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.615156 | orchestrator | Monday 02 February 2026 00:43:15 +0000 (0:00:00.447) 0:00:03.424 *******
2026-02-02 00:43:19.615164 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8)
2026-02-02 00:43:19.615171 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8)
2026-02-02 00:43:19.615178 | orchestrator |
2026-02-02 00:43:19.615190 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.615203 | orchestrator | Monday 02 February 2026 00:43:15 +0000 (0:00:00.668) 0:00:04.092 *******
2026-02-02 00:43:19.615210 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2791ba27-caf1-4c6d-bd50-3e0320bbaa42)
2026-02-02 00:43:19.615217 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2791ba27-caf1-4c6d-bd50-3e0320bbaa42)
2026-02-02 00:43:19.615224 | orchestrator |
2026-02-02 00:43:19.615231 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.615239 | orchestrator | Monday 02 February 2026 00:43:16 +0000 (0:00:00.672) 0:00:04.765 *******
2026-02-02 00:43:19.615246 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b0ea612a-524a-49e0-9350-b51de64b4b0f)
2026-02-02 00:43:19.615253 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b0ea612a-524a-49e0-9350-b51de64b4b0f)
2026-02-02 00:43:19.615260 | orchestrator |
2026-02-02 00:43:19.615267 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:19.615274 | orchestrator | Monday 02 February 2026 00:43:17 +0000 (0:00:00.897) 0:00:05.663 *******
2026-02-02 00:43:19.615281 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-02 00:43:19.615289 | orchestrator |
2026-02-02 00:43:19.615296 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:19.615303 | orchestrator | Monday 02 February 2026 00:43:17 +0000 (0:00:00.357) 0:00:06.020 *******
2026-02-02 00:43:19.615310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-02 00:43:19.615317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-02 00:43:19.615324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-02 00:43:19.615331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-02 00:43:19.615338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-02 00:43:19.615345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-02 00:43:19.615352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-02 00:43:19.615360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-02 00:43:19.615366 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-02 00:43:19.615373 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-02 00:43:19.615381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-02 00:43:19.615388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-02 00:43:19.615396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-02 00:43:19.615403 | orchestrator |
2026-02-02 00:43:19.615410 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:19.615417 | orchestrator | Monday 02 February 2026 00:43:18 +0000 (0:00:00.449) 0:00:06.470 *******
2026-02-02 00:43:19.615425 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.615433 | orchestrator |
2026-02-02 00:43:19.615440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:19.615448 | orchestrator | Monday 02 February 2026 00:43:18 +0000 (0:00:00.231) 0:00:06.701 *******
2026-02-02 00:43:19.615454 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.615460 | orchestrator |
2026-02-02 00:43:19.615467 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:19.615473 | orchestrator | Monday 02 February 2026 00:43:18 +0000 (0:00:00.229) 0:00:06.930 *******
2026-02-02 00:43:19.615480 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.615491 | orchestrator |
2026-02-02 00:43:19.615502 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:19.615514 | orchestrator | Monday 02 February 2026 00:43:18 +0000 (0:00:00.214) 0:00:07.145 *******
2026-02-02 00:43:19.615530 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.615540 | orchestrator |
2026-02-02 00:43:19.615549 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:19.615560 | orchestrator | Monday 02 February 2026 00:43:18 +0000 (0:00:00.184) 0:00:07.330 *******
2026-02-02 00:43:19.615570 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.615580 | orchestrator |
2026-02-02 00:43:19.615589 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:19.615600 | orchestrator | Monday 02 February 2026 00:43:19 +0000 (0:00:00.219) 0:00:07.549 *******
2026-02-02 00:43:19.615609 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.615615 | orchestrator |
2026-02-02 00:43:19.615622 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:19.615628 | orchestrator | Monday 02 February 2026 00:43:19 +0000 (0:00:00.218) 0:00:07.768 *******
2026-02-02 00:43:19.615635 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:19.615641 | orchestrator |
2026-02-02 00:43:19.615652 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:27.611892 | orchestrator | Monday 02 February 2026 00:43:19 +0000 (0:00:00.198) 0:00:07.966 *******
2026-02-02 00:43:27.612038 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:27.612057 | orchestrator |
2026-02-02 00:43:27.612070 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:27.612082 | orchestrator | Monday 02 February 2026 00:43:19 +0000 (0:00:00.194) 0:00:08.161 *******
2026-02-02 00:43:27.612093 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-02 00:43:27.612105 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-02 00:43:27.612117 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-02 00:43:27.612128 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-02 00:43:27.612139 | orchestrator |
2026-02-02 00:43:27.612151 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:27.612181 | orchestrator | Monday 02 February 2026 00:43:20 +0000 (0:00:01.090) 0:00:09.251 *******
2026-02-02 00:43:27.612193 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:27.612204 | orchestrator |
2026-02-02 00:43:27.612215 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:27.612226 | orchestrator | Monday 02 February 2026 00:43:21 +0000 (0:00:00.243) 0:00:09.494 *******
2026-02-02 00:43:27.612237 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:27.612248 | orchestrator |
2026-02-02 00:43:27.612259 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:27.612271 | orchestrator | Monday 02 February 2026 00:43:21 +0000 (0:00:00.245) 0:00:09.740 *******
2026-02-02 00:43:27.612282 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:27.612293 | orchestrator |
2026-02-02 00:43:27.612304 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:27.612315 | orchestrator | Monday 02 February 2026 00:43:21 +0000 (0:00:00.208) 0:00:09.948 *******
2026-02-02 00:43:27.612326 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:27.612338 | orchestrator |
2026-02-02 00:43:27.612349 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-02 00:43:27.612360 | orchestrator | Monday 02 February 2026 00:43:21 +0000 (0:00:00.215) 0:00:10.164 *******
2026-02-02 00:43:27.612371 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-02-02 00:43:27.612382 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-02-02 00:43:27.612393 | orchestrator |
2026-02-02 00:43:27.612404 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-02 00:43:27.612415 | orchestrator | Monday 02 February 2026 00:43:22 +0000 (0:00:00.196) 0:00:10.360 *******
2026-02-02 00:43:27.612450 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:27.612462 | orchestrator |
2026-02-02 00:43:27.612473 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-02 00:43:27.612484 | orchestrator | Monday 02 February 2026 00:43:22 +0000 (0:00:00.157) 0:00:10.517 *******
2026-02-02 00:43:27.612495 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:27.612506 | orchestrator |
2026-02-02 00:43:27.612517 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-02 00:43:27.612528 | orchestrator | Monday 02 February 2026 00:43:22 +0000 (0:00:00.136) 0:00:10.653 *******
2026-02-02 00:43:27.612539 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:43:27.612550 | orchestrator |
2026-02-02 00:43:27.612561 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-02 00:43:27.612572 | orchestrator | Monday 02 February 2026 00:43:22 +0000 (0:00:00.140) 0:00:10.794 *******
2026-02-02 00:43:27.612583 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:43:27.612594 | orchestrator |
2026-02-02 00:43:27.612605 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-02 00:43:27.612616 | orchestrator | Monday 02 February 2026 00:43:22 +0000 (0:00:00.147) 0:00:10.942 *******
2026-02-02 00:43:27.612629 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91c179ef-578a-54fb-a2b0-5b892bd3ac18'}})
2026-02-02 00:43:27.612640 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '91730114-ee0c-5e20-9378-f20099298830'}})
2026-02-02 00:43:27.612652 | orchestrator |
2026-02-02 00:43:27.612663 | orchestrator | TASK
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-02 00:43:27.612674 | orchestrator | Monday 02 February 2026 00:43:22 +0000 (0:00:00.173) 0:00:11.115 ******* 2026-02-02 00:43:27.612685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91c179ef-578a-54fb-a2b0-5b892bd3ac18'}})  2026-02-02 00:43:27.612703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '91730114-ee0c-5e20-9378-f20099298830'}})  2026-02-02 00:43:27.612719 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:43:27.612731 | orchestrator | 2026-02-02 00:43:27.612742 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-02 00:43:27.612753 | orchestrator | Monday 02 February 2026 00:43:22 +0000 (0:00:00.178) 0:00:11.294 ******* 2026-02-02 00:43:27.612764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91c179ef-578a-54fb-a2b0-5b892bd3ac18'}})  2026-02-02 00:43:27.612775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '91730114-ee0c-5e20-9378-f20099298830'}})  2026-02-02 00:43:27.612786 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:43:27.612798 | orchestrator | 2026-02-02 00:43:27.612809 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-02 00:43:27.612820 | orchestrator | Monday 02 February 2026 00:43:23 +0000 (0:00:00.377) 0:00:11.672 ******* 2026-02-02 00:43:27.612830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91c179ef-578a-54fb-a2b0-5b892bd3ac18'}})  2026-02-02 00:43:27.612860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '91730114-ee0c-5e20-9378-f20099298830'}})  2026-02-02 00:43:27.612872 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:43:27.612883 | 
orchestrator | 2026-02-02 00:43:27.612894 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-02 00:43:27.612905 | orchestrator | Monday 02 February 2026 00:43:23 +0000 (0:00:00.157) 0:00:11.829 ******* 2026-02-02 00:43:27.612916 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:43:27.612927 | orchestrator | 2026-02-02 00:43:27.612965 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-02 00:43:27.612976 | orchestrator | Monday 02 February 2026 00:43:23 +0000 (0:00:00.155) 0:00:11.985 ******* 2026-02-02 00:43:27.612987 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:43:27.613007 | orchestrator | 2026-02-02 00:43:27.613018 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-02 00:43:27.613029 | orchestrator | Monday 02 February 2026 00:43:23 +0000 (0:00:00.139) 0:00:12.124 ******* 2026-02-02 00:43:27.613040 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:43:27.613051 | orchestrator | 2026-02-02 00:43:27.613062 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-02 00:43:27.613073 | orchestrator | Monday 02 February 2026 00:43:23 +0000 (0:00:00.149) 0:00:12.274 ******* 2026-02-02 00:43:27.613084 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:43:27.613095 | orchestrator | 2026-02-02 00:43:27.613106 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-02 00:43:27.613117 | orchestrator | Monday 02 February 2026 00:43:24 +0000 (0:00:00.143) 0:00:12.417 ******* 2026-02-02 00:43:27.613128 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:43:27.613139 | orchestrator | 2026-02-02 00:43:27.613150 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-02 00:43:27.613161 | orchestrator | Monday 02 February 2026 00:43:24 +0000 
(0:00:00.141) 0:00:12.559 ******* 2026-02-02 00:43:27.613172 | orchestrator | ok: [testbed-node-3] => { 2026-02-02 00:43:27.613183 | orchestrator |  "ceph_osd_devices": { 2026-02-02 00:43:27.613194 | orchestrator |  "sdb": { 2026-02-02 00:43:27.613206 | orchestrator |  "osd_lvm_uuid": "91c179ef-578a-54fb-a2b0-5b892bd3ac18" 2026-02-02 00:43:27.613217 | orchestrator |  }, 2026-02-02 00:43:27.613228 | orchestrator |  "sdc": { 2026-02-02 00:43:27.613239 | orchestrator |  "osd_lvm_uuid": "91730114-ee0c-5e20-9378-f20099298830" 2026-02-02 00:43:27.613250 | orchestrator |  } 2026-02-02 00:43:27.613261 | orchestrator |  } 2026-02-02 00:43:27.613273 | orchestrator | } 2026-02-02 00:43:27.613284 | orchestrator | 2026-02-02 00:43:27.613296 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-02 00:43:27.613307 | orchestrator | Monday 02 February 2026 00:43:24 +0000 (0:00:00.143) 0:00:12.703 ******* 2026-02-02 00:43:27.613318 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:43:27.613329 | orchestrator | 2026-02-02 00:43:27.613340 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-02 00:43:27.613351 | orchestrator | Monday 02 February 2026 00:43:24 +0000 (0:00:00.131) 0:00:12.835 ******* 2026-02-02 00:43:27.613362 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:43:27.613373 | orchestrator | 2026-02-02 00:43:27.613384 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-02 00:43:27.613395 | orchestrator | Monday 02 February 2026 00:43:24 +0000 (0:00:00.131) 0:00:12.966 ******* 2026-02-02 00:43:27.613407 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:43:27.613418 | orchestrator | 2026-02-02 00:43:27.613429 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-02 00:43:27.613440 | orchestrator | Monday 02 February 2026 00:43:24 +0000 
(0:00:00.131) 0:00:13.098 ******* 2026-02-02 00:43:27.613451 | orchestrator | changed: [testbed-node-3] => { 2026-02-02 00:43:27.613462 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-02 00:43:27.613473 | orchestrator |  "ceph_osd_devices": { 2026-02-02 00:43:27.613484 | orchestrator |  "sdb": { 2026-02-02 00:43:27.613495 | orchestrator |  "osd_lvm_uuid": "91c179ef-578a-54fb-a2b0-5b892bd3ac18" 2026-02-02 00:43:27.613506 | orchestrator |  }, 2026-02-02 00:43:27.613517 | orchestrator |  "sdc": { 2026-02-02 00:43:27.613528 | orchestrator |  "osd_lvm_uuid": "91730114-ee0c-5e20-9378-f20099298830" 2026-02-02 00:43:27.613539 | orchestrator |  } 2026-02-02 00:43:27.613550 | orchestrator |  }, 2026-02-02 00:43:27.613561 | orchestrator |  "lvm_volumes": [ 2026-02-02 00:43:27.613572 | orchestrator |  { 2026-02-02 00:43:27.613583 | orchestrator |  "data": "osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18", 2026-02-02 00:43:27.613594 | orchestrator |  "data_vg": "ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18" 2026-02-02 00:43:27.613612 | orchestrator |  }, 2026-02-02 00:43:27.613623 | orchestrator |  { 2026-02-02 00:43:27.613635 | orchestrator |  "data": "osd-block-91730114-ee0c-5e20-9378-f20099298830", 2026-02-02 00:43:27.613646 | orchestrator |  "data_vg": "ceph-91730114-ee0c-5e20-9378-f20099298830" 2026-02-02 00:43:27.613657 | orchestrator |  } 2026-02-02 00:43:27.613668 | orchestrator |  ] 2026-02-02 00:43:27.613679 | orchestrator |  } 2026-02-02 00:43:27.613690 | orchestrator | } 2026-02-02 00:43:27.613701 | orchestrator | 2026-02-02 00:43:27.613712 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-02 00:43:27.613723 | orchestrator | Monday 02 February 2026 00:43:25 +0000 (0:00:00.423) 0:00:13.522 ******* 2026-02-02 00:43:27.613733 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-02 00:43:27.613744 | orchestrator | 2026-02-02 00:43:27.613755 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-02 00:43:27.613766 | orchestrator | 2026-02-02 00:43:27.613777 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-02 00:43:27.613788 | orchestrator | Monday 02 February 2026 00:43:27 +0000 (0:00:01.917) 0:00:15.440 ******* 2026-02-02 00:43:27.613799 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-02 00:43:27.613810 | orchestrator | 2026-02-02 00:43:27.613821 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-02 00:43:27.613832 | orchestrator | Monday 02 February 2026 00:43:27 +0000 (0:00:00.259) 0:00:15.699 ******* 2026-02-02 00:43:27.613843 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:43:27.613854 | orchestrator | 2026-02-02 00:43:27.613871 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:43:35.865675 | orchestrator | Monday 02 February 2026 00:43:27 +0000 (0:00:00.266) 0:00:15.966 ******* 2026-02-02 00:43:35.865779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-02 00:43:35.865795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-02 00:43:35.865806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-02 00:43:35.865818 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-02 00:43:35.865829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-02 00:43:35.865840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-02 00:43:35.865851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-02 00:43:35.865868 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-02 00:43:35.865879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-02 00:43:35.865891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-02 00:43:35.865902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-02 00:43:35.865913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-02 00:43:35.865994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-02 00:43:35.866008 | orchestrator | 2026-02-02 00:43:35.866085 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:43:35.866097 | orchestrator | Monday 02 February 2026 00:43:28 +0000 (0:00:00.402) 0:00:16.369 ******* 2026-02-02 00:43:35.866109 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.866121 | orchestrator | 2026-02-02 00:43:35.866132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:43:35.866143 | orchestrator | Monday 02 February 2026 00:43:28 +0000 (0:00:00.218) 0:00:16.587 ******* 2026-02-02 00:43:35.866180 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.866192 | orchestrator | 2026-02-02 00:43:35.866203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:43:35.866214 | orchestrator | Monday 02 February 2026 00:43:28 +0000 (0:00:00.247) 0:00:16.834 ******* 2026-02-02 00:43:35.866225 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.866235 | orchestrator | 2026-02-02 00:43:35.866246 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:43:35.866257 | 
orchestrator | Monday 02 February 2026 00:43:28 +0000 (0:00:00.191) 0:00:17.026 ******* 2026-02-02 00:43:35.866268 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.866279 | orchestrator | 2026-02-02 00:43:35.866290 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:43:35.866301 | orchestrator | Monday 02 February 2026 00:43:28 +0000 (0:00:00.197) 0:00:17.223 ******* 2026-02-02 00:43:35.866312 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.866322 | orchestrator | 2026-02-02 00:43:35.866333 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:43:35.866344 | orchestrator | Monday 02 February 2026 00:43:29 +0000 (0:00:00.626) 0:00:17.850 ******* 2026-02-02 00:43:35.866355 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.866366 | orchestrator | 2026-02-02 00:43:35.866377 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:43:35.866387 | orchestrator | Monday 02 February 2026 00:43:29 +0000 (0:00:00.214) 0:00:18.064 ******* 2026-02-02 00:43:35.866398 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.866409 | orchestrator | 2026-02-02 00:43:35.866420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:43:35.866431 | orchestrator | Monday 02 February 2026 00:43:29 +0000 (0:00:00.218) 0:00:18.282 ******* 2026-02-02 00:43:35.866441 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.866452 | orchestrator | 2026-02-02 00:43:35.866463 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:43:35.866474 | orchestrator | Monday 02 February 2026 00:43:30 +0000 (0:00:00.218) 0:00:18.501 ******* 2026-02-02 00:43:35.866484 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf) 2026-02-02 00:43:35.866497 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf) 2026-02-02 00:43:35.866508 | orchestrator | 2026-02-02 00:43:35.866519 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:43:35.866530 | orchestrator | Monday 02 February 2026 00:43:30 +0000 (0:00:00.415) 0:00:18.916 ******* 2026-02-02 00:43:35.866541 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8be21885-29f7-4026-87ce-cd032f624f70) 2026-02-02 00:43:35.866552 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8be21885-29f7-4026-87ce-cd032f624f70) 2026-02-02 00:43:35.866563 | orchestrator | 2026-02-02 00:43:35.866573 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:43:35.866584 | orchestrator | Monday 02 February 2026 00:43:30 +0000 (0:00:00.429) 0:00:19.346 ******* 2026-02-02 00:43:35.866595 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_78cf2400-96ef-4814-8ef8-9c5b7903f7b2) 2026-02-02 00:43:35.866606 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_78cf2400-96ef-4814-8ef8-9c5b7903f7b2) 2026-02-02 00:43:35.866617 | orchestrator | 2026-02-02 00:43:35.866628 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:43:35.866657 | orchestrator | Monday 02 February 2026 00:43:31 +0000 (0:00:00.446) 0:00:19.793 ******* 2026-02-02 00:43:35.866669 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2465f817-ac52-4990-8055-49becb307e2f) 2026-02-02 00:43:35.866680 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2465f817-ac52-4990-8055-49becb307e2f) 2026-02-02 00:43:35.866691 | orchestrator | 2026-02-02 00:43:35.866709 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-02 00:43:35.866720 | orchestrator | Monday 02 February 2026 00:43:31 +0000 (0:00:00.467) 0:00:20.260 ******* 2026-02-02 00:43:35.866731 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-02 00:43:35.866742 | orchestrator | 2026-02-02 00:43:35.866753 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:35.866764 | orchestrator | Monday 02 February 2026 00:43:32 +0000 (0:00:00.345) 0:00:20.606 ******* 2026-02-02 00:43:35.866774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-02 00:43:35.866785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-02 00:43:35.866802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-02 00:43:35.866814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-02 00:43:35.866825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-02 00:43:35.866835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-02 00:43:35.866846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-02 00:43:35.866857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-02 00:43:35.866868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-02 00:43:35.866879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-02 00:43:35.866889 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-02-02 00:43:35.866900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-02 00:43:35.866911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-02 00:43:35.866922 | orchestrator | 2026-02-02 00:43:35.866954 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:35.866965 | orchestrator | Monday 02 February 2026 00:43:32 +0000 (0:00:00.418) 0:00:21.024 ******* 2026-02-02 00:43:35.866976 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.866987 | orchestrator | 2026-02-02 00:43:35.866998 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:35.867009 | orchestrator | Monday 02 February 2026 00:43:33 +0000 (0:00:00.719) 0:00:21.744 ******* 2026-02-02 00:43:35.867020 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.867030 | orchestrator | 2026-02-02 00:43:35.867041 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:35.867052 | orchestrator | Monday 02 February 2026 00:43:33 +0000 (0:00:00.204) 0:00:21.948 ******* 2026-02-02 00:43:35.867063 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.867074 | orchestrator | 2026-02-02 00:43:35.867085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:35.867095 | orchestrator | Monday 02 February 2026 00:43:33 +0000 (0:00:00.214) 0:00:22.163 ******* 2026-02-02 00:43:35.867106 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.867117 | orchestrator | 2026-02-02 00:43:35.867128 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:35.867139 | orchestrator | Monday 02 February 2026 00:43:34 +0000 (0:00:00.196) 0:00:22.359 ******* 2026-02-02 00:43:35.867150 
| orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.867160 | orchestrator | 2026-02-02 00:43:35.867171 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:35.867182 | orchestrator | Monday 02 February 2026 00:43:34 +0000 (0:00:00.211) 0:00:22.571 ******* 2026-02-02 00:43:35.867193 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.867214 | orchestrator | 2026-02-02 00:43:35.867228 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:35.867246 | orchestrator | Monday 02 February 2026 00:43:34 +0000 (0:00:00.235) 0:00:22.807 ******* 2026-02-02 00:43:35.867265 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.867283 | orchestrator | 2026-02-02 00:43:35.867300 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:35.867316 | orchestrator | Monday 02 February 2026 00:43:34 +0000 (0:00:00.209) 0:00:23.017 ******* 2026-02-02 00:43:35.867334 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:35.867350 | orchestrator | 2026-02-02 00:43:35.867365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:35.867383 | orchestrator | Monday 02 February 2026 00:43:34 +0000 (0:00:00.207) 0:00:23.224 ******* 2026-02-02 00:43:35.867401 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-02 00:43:35.867419 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-02 00:43:35.867439 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-02 00:43:35.867458 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-02 00:43:35.867471 | orchestrator | 2026-02-02 00:43:35.867482 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:35.867493 | orchestrator | Monday 02 February 2026 00:43:35 +0000 (0:00:00.867) 
0:00:24.092 ******* 2026-02-02 00:43:35.867504 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:43.019755 | orchestrator | 2026-02-02 00:43:43.019836 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:43.019846 | orchestrator | Monday 02 February 2026 00:43:35 +0000 (0:00:00.207) 0:00:24.300 ******* 2026-02-02 00:43:43.019851 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:43.019858 | orchestrator | 2026-02-02 00:43:43.019864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:43.019870 | orchestrator | Monday 02 February 2026 00:43:36 +0000 (0:00:00.200) 0:00:24.501 ******* 2026-02-02 00:43:43.019875 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:43.019880 | orchestrator | 2026-02-02 00:43:43.019886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:43:43.019891 | orchestrator | Monday 02 February 2026 00:43:36 +0000 (0:00:00.192) 0:00:24.693 ******* 2026-02-02 00:43:43.019896 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:43.019902 | orchestrator | 2026-02-02 00:43:43.019907 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-02 00:43:43.019912 | orchestrator | Monday 02 February 2026 00:43:37 +0000 (0:00:00.769) 0:00:25.463 ******* 2026-02-02 00:43:43.019918 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-02 00:43:43.019923 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-02 00:43:43.019929 | orchestrator | 2026-02-02 00:43:43.019989 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-02 00:43:43.020008 | orchestrator | Monday 02 February 2026 00:43:37 +0000 (0:00:00.195) 0:00:25.659 ******* 2026-02-02 00:43:43.020014 | orchestrator | skipping: 
[testbed-node-4] 2026-02-02 00:43:43.020020 | orchestrator | 2026-02-02 00:43:43.020025 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-02 00:43:43.020031 | orchestrator | Monday 02 February 2026 00:43:37 +0000 (0:00:00.152) 0:00:25.811 ******* 2026-02-02 00:43:43.020036 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:43.020041 | orchestrator | 2026-02-02 00:43:43.020047 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-02 00:43:43.020055 | orchestrator | Monday 02 February 2026 00:43:37 +0000 (0:00:00.145) 0:00:25.956 ******* 2026-02-02 00:43:43.020060 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:43:43.020066 | orchestrator | 2026-02-02 00:43:43.020071 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-02 00:43:43.020076 | orchestrator | Monday 02 February 2026 00:43:37 +0000 (0:00:00.154) 0:00:26.111 ******* 2026-02-02 00:43:43.020098 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:43:43.020105 | orchestrator | 2026-02-02 00:43:43.020111 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-02 00:43:43.020116 | orchestrator | Monday 02 February 2026 00:43:37 +0000 (0:00:00.191) 0:00:26.303 ******* 2026-02-02 00:43:43.020122 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '604951f0-1bde-54b3-957a-2369560b0fa2'}}) 2026-02-02 00:43:43.020128 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edd20676-fc89-5b2b-b977-99722e90cce2'}}) 2026-02-02 00:43:43.020134 | orchestrator | 2026-02-02 00:43:43.020139 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-02 00:43:43.020144 | orchestrator | Monday 02 February 2026 00:43:38 +0000 (0:00:00.206) 0:00:26.509 ******* 2026-02-02 00:43:43.020151 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '604951f0-1bde-54b3-957a-2369560b0fa2'}})
2026-02-02 00:43:43.020158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edd20676-fc89-5b2b-b977-99722e90cce2'}})
2026-02-02 00:43:43.020163 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:43:43.020169 | orchestrator |
2026-02-02 00:43:43.020174 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-02 00:43:43.020179 | orchestrator | Monday 02 February 2026 00:43:38 +0000 (0:00:00.169) 0:00:26.679 *******
2026-02-02 00:43:43.020185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '604951f0-1bde-54b3-957a-2369560b0fa2'}})
2026-02-02 00:43:43.020190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edd20676-fc89-5b2b-b977-99722e90cce2'}})
2026-02-02 00:43:43.020196 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:43:43.020201 | orchestrator |
2026-02-02 00:43:43.020207 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-02 00:43:43.020212 | orchestrator | Monday 02 February 2026 00:43:38 +0000 (0:00:00.192) 0:00:26.871 *******
2026-02-02 00:43:43.020218 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '604951f0-1bde-54b3-957a-2369560b0fa2'}})
2026-02-02 00:43:43.020223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edd20676-fc89-5b2b-b977-99722e90cce2'}})
2026-02-02 00:43:43.020228 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:43:43.020234 | orchestrator |
2026-02-02 00:43:43.020239 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-02 00:43:43.020245 | orchestrator | Monday 02 February 2026 00:43:38 +0000 (0:00:00.187) 0:00:27.058 *******
2026-02-02 00:43:43.020250 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:43:43.020255 | orchestrator |
2026-02-02 00:43:43.020261 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-02 00:43:43.020266 | orchestrator | Monday 02 February 2026 00:43:38 +0000 (0:00:00.140) 0:00:27.199 *******
2026-02-02 00:43:43.020272 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:43:43.020277 | orchestrator |
2026-02-02 00:43:43.020282 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-02 00:43:43.020288 | orchestrator | Monday 02 February 2026 00:43:38 +0000 (0:00:00.142) 0:00:27.341 *******
2026-02-02 00:43:43.020304 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:43:43.020309 | orchestrator |
2026-02-02 00:43:43.020315 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-02 00:43:43.020322 | orchestrator | Monday 02 February 2026 00:43:39 +0000 (0:00:00.422) 0:00:27.764 *******
2026-02-02 00:43:43.020328 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:43:43.020335 | orchestrator |
2026-02-02 00:43:43.020341 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-02 00:43:43.020347 | orchestrator | Monday 02 February 2026 00:43:39 +0000 (0:00:00.145) 0:00:27.909 *******
2026-02-02 00:43:43.020353 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:43:43.020363 | orchestrator |
2026-02-02 00:43:43.020370 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-02 00:43:43.020376 | orchestrator | Monday 02 February 2026 00:43:39 +0000 (0:00:00.136) 0:00:28.045 *******
2026-02-02 00:43:43.020382 | orchestrator | ok: [testbed-node-4] => {
2026-02-02 00:43:43.020388 | orchestrator |     "ceph_osd_devices": {
2026-02-02 00:43:43.020395 | orchestrator |         "sdb": {
2026-02-02 00:43:43.020401 | orchestrator |             "osd_lvm_uuid": "604951f0-1bde-54b3-957a-2369560b0fa2"
2026-02-02 00:43:43.020408 | orchestrator |         },
2026-02-02 00:43:43.020414 | orchestrator |         "sdc": {
2026-02-02 00:43:43.020420 | orchestrator |             "osd_lvm_uuid": "edd20676-fc89-5b2b-b977-99722e90cce2"
2026-02-02 00:43:43.020426 | orchestrator |         }
2026-02-02 00:43:43.020433 | orchestrator |     }
2026-02-02 00:43:43.020439 | orchestrator | }
2026-02-02 00:43:43.020445 | orchestrator |
2026-02-02 00:43:43.020451 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-02 00:43:43.020457 | orchestrator | Monday 02 February 2026 00:43:39 +0000 (0:00:00.151) 0:00:28.197 *******
2026-02-02 00:43:43.020463 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:43:43.020470 | orchestrator |
2026-02-02 00:43:43.020475 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-02 00:43:43.020481 | orchestrator | Monday 02 February 2026 00:43:39 +0000 (0:00:00.137) 0:00:28.334 *******
2026-02-02 00:43:43.020488 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:43:43.020494 | orchestrator |
2026-02-02 00:43:43.020500 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-02 00:43:43.020506 | orchestrator | Monday 02 February 2026 00:43:40 +0000 (0:00:00.141) 0:00:28.476 *******
2026-02-02 00:43:43.020512 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:43:43.020518 | orchestrator |
2026-02-02 00:43:43.020525 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-02 00:43:43.020534 | orchestrator | Monday 02 February 2026 00:43:40 +0000 (0:00:00.140) 0:00:28.616 *******
2026-02-02 00:43:43.020541 | orchestrator | changed: [testbed-node-4] => {
2026-02-02 00:43:43.020547 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-02 00:43:43.020554 | orchestrator |         "ceph_osd_devices": {
2026-02-02 00:43:43.020560 | orchestrator |             "sdb": {
2026-02-02 00:43:43.020567 | orchestrator |                 "osd_lvm_uuid": "604951f0-1bde-54b3-957a-2369560b0fa2"
2026-02-02 00:43:43.020573 | orchestrator |             },
2026-02-02 00:43:43.020580 | orchestrator |             "sdc": {
2026-02-02 00:43:43.020586 | orchestrator |                 "osd_lvm_uuid": "edd20676-fc89-5b2b-b977-99722e90cce2"
2026-02-02 00:43:43.020592 | orchestrator |             }
2026-02-02 00:43:43.020599 | orchestrator |         },
2026-02-02 00:43:43.020605 | orchestrator |         "lvm_volumes": [
2026-02-02 00:43:43.020611 | orchestrator |             {
2026-02-02 00:43:43.020618 | orchestrator |                 "data": "osd-block-604951f0-1bde-54b3-957a-2369560b0fa2",
2026-02-02 00:43:43.020624 | orchestrator |                 "data_vg": "ceph-604951f0-1bde-54b3-957a-2369560b0fa2"
2026-02-02 00:43:43.020631 | orchestrator |             },
2026-02-02 00:43:43.020637 | orchestrator |             {
2026-02-02 00:43:43.020644 | orchestrator |                 "data": "osd-block-edd20676-fc89-5b2b-b977-99722e90cce2",
2026-02-02 00:43:43.020650 | orchestrator |                 "data_vg": "ceph-edd20676-fc89-5b2b-b977-99722e90cce2"
2026-02-02 00:43:43.020656 | orchestrator |             }
2026-02-02 00:43:43.020662 | orchestrator |         ]
2026-02-02 00:43:43.020669 | orchestrator |     }
2026-02-02 00:43:43.020676 | orchestrator | }
2026-02-02 00:43:43.020682 | orchestrator |
2026-02-02 00:43:43.020689 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-02 00:43:43.020695 | orchestrator | Monday 02 February 2026 00:43:40 +0000 (0:00:00.244) 0:00:28.861 *******
2026-02-02 00:43:43.020701 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-02 00:43:43.020707 | orchestrator |
2026-02-02 00:43:43.020716 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-02 00:43:43.020721 | orchestrator |
2026-02-02 00:43:43.020727 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-02 00:43:43.020733 | orchestrator | Monday 02 February 2026 00:43:41 +0000 (0:00:01.147) 0:00:30.009 *******
2026-02-02 00:43:43.020738 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-02 00:43:43.020744 | orchestrator |
2026-02-02 00:43:43.020749 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-02 00:43:43.020755 | orchestrator | Monday 02 February 2026 00:43:42 +0000 (0:00:00.773) 0:00:30.783 *******
2026-02-02 00:43:43.020761 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:43:43.020766 | orchestrator |
2026-02-02 00:43:43.020772 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:43.020777 | orchestrator | Monday 02 February 2026 00:43:42 +0000 (0:00:00.275) 0:00:31.058 *******
2026-02-02 00:43:43.020783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-02 00:43:43.020788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-02 00:43:43.020794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-02 00:43:43.020800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-02 00:43:43.020805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-02 00:43:43.020814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-02 00:43:52.068387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-02 00:43:52.068482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-02 00:43:52.068494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-02 00:43:52.068503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-02 00:43:52.068512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-02 00:43:52.068520 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-02 00:43:52.068529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-02 00:43:52.068538 | orchestrator |
2026-02-02 00:43:52.068548 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:52.068557 | orchestrator | Monday 02 February 2026 00:43:43 +0000 (0:00:00.408) 0:00:31.467 *******
2026-02-02 00:43:52.068566 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.068576 | orchestrator |
2026-02-02 00:43:52.068584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:52.068593 | orchestrator | Monday 02 February 2026 00:43:43 +0000 (0:00:00.293) 0:00:31.761 *******
2026-02-02 00:43:52.068602 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.068610 | orchestrator |
2026-02-02 00:43:52.068619 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:52.068628 | orchestrator | Monday 02 February 2026 00:43:43 +0000 (0:00:00.250) 0:00:32.011 *******
2026-02-02 00:43:52.068636 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.068645 | orchestrator |
2026-02-02 00:43:52.068653 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:52.068662 | orchestrator | Monday 02 February 2026 00:43:43 +0000 (0:00:00.196) 0:00:32.208 *******
2026-02-02 00:43:52.068670 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.068679 | orchestrator |
2026-02-02 00:43:52.068688 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:52.068696 | orchestrator | Monday 02 February 2026 00:43:44 +0000 (0:00:00.191) 0:00:32.400 *******
2026-02-02 00:43:52.068727 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.068736 | orchestrator |
2026-02-02 00:43:52.068744 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:52.068752 | orchestrator | Monday 02 February 2026 00:43:44 +0000 (0:00:00.181) 0:00:32.581 *******
2026-02-02 00:43:52.068760 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.068768 | orchestrator |
2026-02-02 00:43:52.068777 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:52.068785 | orchestrator | Monday 02 February 2026 00:43:44 +0000 (0:00:00.195) 0:00:32.777 *******
2026-02-02 00:43:52.068793 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.068801 | orchestrator |
2026-02-02 00:43:52.068809 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:52.068818 | orchestrator | Monday 02 February 2026 00:43:44 +0000 (0:00:00.214) 0:00:32.991 *******
2026-02-02 00:43:52.068826 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.068834 | orchestrator |
2026-02-02 00:43:52.068842 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:52.068850 | orchestrator | Monday 02 February 2026 00:43:44 +0000 (0:00:00.197) 0:00:33.188 *******
2026-02-02 00:43:52.068858 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db)
2026-02-02 00:43:52.068867 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db)
2026-02-02 00:43:52.068875 | orchestrator |
2026-02-02 00:43:52.068883 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:52.068891 | orchestrator | Monday 02 February 2026 00:43:45 +0000 (0:00:00.868) 0:00:34.057 *******
2026-02-02 00:43:52.068913 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_09bfd8bd-6f2f-4d2c-8da9-081114b71f81)
2026-02-02 00:43:52.068922 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_09bfd8bd-6f2f-4d2c-8da9-081114b71f81)
2026-02-02 00:43:52.068957 | orchestrator |
2026-02-02 00:43:52.068968 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:52.068978 | orchestrator | Monday 02 February 2026 00:43:46 +0000 (0:00:00.544) 0:00:34.602 *******
2026-02-02 00:43:52.068988 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_16c98517-e0bb-4d3e-881d-5a0c6479c324)
2026-02-02 00:43:52.068997 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_16c98517-e0bb-4d3e-881d-5a0c6479c324)
2026-02-02 00:43:52.069007 | orchestrator |
2026-02-02 00:43:52.069016 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:52.069035 | orchestrator | Monday 02 February 2026 00:43:46 +0000 (0:00:00.455) 0:00:35.058 *******
2026-02-02 00:43:52.069044 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b6ca68c3-b66c-4649-954f-01a9ba336075)
2026-02-02 00:43:52.069053 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b6ca68c3-b66c-4649-954f-01a9ba336075)
2026-02-02 00:43:52.069063 | orchestrator |
2026-02-02 00:43:52.069072 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:43:52.069081 | orchestrator | Monday 02 February 2026 00:43:47 +0000 (0:00:00.452) 0:00:35.510 *******
2026-02-02 00:43:52.069091 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-02 00:43:52.069101 | orchestrator |
2026-02-02 00:43:52.069110 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069133 | orchestrator | Monday 02 February 2026 00:43:47 +0000 (0:00:00.500) 0:00:36.010 *******
2026-02-02 00:43:52.069143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-02 00:43:52.069152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-02 00:43:52.069162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-02 00:43:52.069171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-02 00:43:52.069188 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-02 00:43:52.069197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-02 00:43:52.069206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-02 00:43:52.069216 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-02 00:43:52.069225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-02 00:43:52.069234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-02 00:43:52.069241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-02 00:43:52.069250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-02 00:43:52.069257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-02 00:43:52.069265 | orchestrator |
2026-02-02 00:43:52.069273 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069281 | orchestrator | Monday 02 February 2026 00:43:48 +0000 (0:00:00.451) 0:00:36.462 *******
2026-02-02 00:43:52.069289 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.069297 | orchestrator |
2026-02-02 00:43:52.069305 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069313 | orchestrator | Monday 02 February 2026 00:43:48 +0000 (0:00:00.211) 0:00:36.673 *******
2026-02-02 00:43:52.069321 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.069329 | orchestrator |
2026-02-02 00:43:52.069337 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069345 | orchestrator | Monday 02 February 2026 00:43:48 +0000 (0:00:00.193) 0:00:36.867 *******
2026-02-02 00:43:52.069353 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.069366 | orchestrator |
2026-02-02 00:43:52.069379 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069390 | orchestrator | Monday 02 February 2026 00:43:48 +0000 (0:00:00.207) 0:00:37.074 *******
2026-02-02 00:43:52.069404 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.069419 | orchestrator |
2026-02-02 00:43:52.069429 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069437 | orchestrator | Monday 02 February 2026 00:43:48 +0000 (0:00:00.212) 0:00:37.286 *******
2026-02-02 00:43:52.069445 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.069453 | orchestrator |
2026-02-02 00:43:52.069461 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069469 | orchestrator | Monday 02 February 2026 00:43:49 +0000 (0:00:00.210) 0:00:37.497 *******
2026-02-02 00:43:52.069477 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.069485 | orchestrator |
2026-02-02 00:43:52.069493 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069501 | orchestrator | Monday 02 February 2026 00:43:49 +0000 (0:00:00.756) 0:00:38.254 *******
2026-02-02 00:43:52.069509 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.069517 | orchestrator |
2026-02-02 00:43:52.069525 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069533 | orchestrator | Monday 02 February 2026 00:43:50 +0000 (0:00:00.235) 0:00:38.489 *******
2026-02-02 00:43:52.069541 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.069549 | orchestrator |
2026-02-02 00:43:52.069557 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069565 | orchestrator | Monday 02 February 2026 00:43:50 +0000 (0:00:00.229) 0:00:38.719 *******
2026-02-02 00:43:52.069573 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-02 00:43:52.069625 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-02 00:43:52.069634 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-02 00:43:52.069642 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-02 00:43:52.069650 | orchestrator |
2026-02-02 00:43:52.069658 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069667 | orchestrator | Monday 02 February 2026 00:43:51 +0000 (0:00:00.823) 0:00:39.543 *******
2026-02-02 00:43:52.069674 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.069682 | orchestrator |
2026-02-02 00:43:52.069691 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069699 | orchestrator | Monday 02 February 2026 00:43:51 +0000 (0:00:00.255) 0:00:39.798 *******
2026-02-02 00:43:52.069707 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.069715 | orchestrator |
2026-02-02 00:43:52.069723 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069731 | orchestrator | Monday 02 February 2026 00:43:51 +0000 (0:00:00.196) 0:00:39.995 *******
2026-02-02 00:43:52.069739 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.069747 | orchestrator |
2026-02-02 00:43:52.069755 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:43:52.069763 | orchestrator | Monday 02 February 2026 00:43:51 +0000 (0:00:00.197) 0:00:40.192 *******
2026-02-02 00:43:52.069771 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:52.069779 | orchestrator |
2026-02-02 00:43:52.069793 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-02 00:43:56.442516 | orchestrator | Monday 02 February 2026 00:43:52 +0000 (0:00:00.233) 0:00:40.427 *******
2026-02-02 00:43:56.442600 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-02-02 00:43:56.442610 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-02-02 00:43:56.442618 | orchestrator |
2026-02-02 00:43:56.442626 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-02 00:43:56.442634 | orchestrator | Monday 02 February 2026 00:43:52 +0000 (0:00:00.186) 0:00:40.614 *******
2026-02-02 00:43:56.442641 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:56.442648 | orchestrator |
2026-02-02 00:43:56.442655 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-02 00:43:56.442662 | orchestrator | Monday 02 February 2026 00:43:52 +0000 (0:00:00.397) 0:00:41.011 *******
2026-02-02 00:43:56.442685 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:56.442692 | orchestrator |
2026-02-02 00:43:56.442699 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-02 00:43:56.442706 | orchestrator | Monday 02 February 2026 00:43:52 +0000 (0:00:00.115) 0:00:41.127 *******
2026-02-02 00:43:56.442713 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:56.442720 | orchestrator |
2026-02-02 00:43:56.442728 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-02 00:43:56.442735 | orchestrator | Monday 02 February 2026 00:43:53 +0000 (0:00:00.115) 0:00:41.395 *******
2026-02-02 00:43:56.442742 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:43:56.442749 | orchestrator |
2026-02-02 00:43:56.442757 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-02 00:43:56.442764 | orchestrator | Monday 02 February 2026 00:43:53 +0000 (0:00:00.115) 0:00:41.510 *******
2026-02-02 00:43:56.442771 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'}})
2026-02-02 00:43:56.442782 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0f572543-3461-541d-9614-18cfec52b251'}})
2026-02-02 00:43:56.442789 | orchestrator |
2026-02-02 00:43:56.442796 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-02 00:43:56.442803 | orchestrator | Monday 02 February 2026 00:43:53 +0000 (0:00:00.162) 0:00:41.673 *******
2026-02-02 00:43:56.442810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'}})
2026-02-02 00:43:56.442838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0f572543-3461-541d-9614-18cfec52b251'}})
2026-02-02 00:43:56.442846 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:56.442853 | orchestrator |
2026-02-02 00:43:56.442859 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-02 00:43:56.442866 | orchestrator | Monday 02 February 2026 00:43:53 +0000 (0:00:00.147) 0:00:41.820 *******
2026-02-02 00:43:56.442873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'}})
2026-02-02 00:43:56.442880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0f572543-3461-541d-9614-18cfec52b251'}})
2026-02-02 00:43:56.442886 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:56.442893 | orchestrator |
2026-02-02 00:43:56.442900 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-02 00:43:56.442906 | orchestrator | Monday 02 February 2026 00:43:53 +0000 (0:00:00.129) 0:00:41.950 *******
2026-02-02 00:43:56.442913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'}})
2026-02-02 00:43:56.442920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0f572543-3461-541d-9614-18cfec52b251'}})
2026-02-02 00:43:56.442927 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:56.442994 | orchestrator |
2026-02-02 00:43:56.443002 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-02 00:43:56.443009 | orchestrator | Monday 02 February 2026 00:43:53 +0000 (0:00:00.128) 0:00:42.079 *******
2026-02-02 00:43:56.443016 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:43:56.443023 | orchestrator |
2026-02-02 00:43:56.443029 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-02 00:43:56.443036 | orchestrator | Monday 02 February 2026 00:43:53 +0000 (0:00:00.131) 0:00:42.211 *******
2026-02-02 00:43:56.443043 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:43:56.443050 | orchestrator |
2026-02-02 00:43:56.443057 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-02 00:43:56.443064 | orchestrator | Monday 02 February 2026 00:43:53 +0000 (0:00:00.119) 0:00:42.330 *******
2026-02-02 00:43:56.443072 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:56.443079 | orchestrator |
2026-02-02 00:43:56.443087 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-02 00:43:56.443094 | orchestrator | Monday 02 February 2026 00:43:54 +0000 (0:00:00.145) 0:00:42.476 *******
2026-02-02 00:43:56.443102 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:56.443110 | orchestrator |
2026-02-02 00:43:56.443118 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-02 00:43:56.443126 | orchestrator | Monday 02 February 2026 00:43:54 +0000 (0:00:00.110) 0:00:42.587 *******
2026-02-02 00:43:56.443133 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:56.443141 | orchestrator |
2026-02-02 00:43:56.443149 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-02 00:43:56.443157 | orchestrator | Monday 02 February 2026 00:43:54 +0000 (0:00:00.144) 0:00:42.732 *******
2026-02-02 00:43:56.443165 | orchestrator | ok: [testbed-node-5] => {
2026-02-02 00:43:56.443173 | orchestrator |     "ceph_osd_devices": {
2026-02-02 00:43:56.443180 | orchestrator |         "sdb": {
2026-02-02 00:43:56.443202 | orchestrator |             "osd_lvm_uuid": "ee22aeb6-8be3-5eb7-a208-f7c11744cdf7"
2026-02-02 00:43:56.443209 | orchestrator |         },
2026-02-02 00:43:56.443216 | orchestrator |         "sdc": {
2026-02-02 00:43:56.443223 | orchestrator |             "osd_lvm_uuid": "0f572543-3461-541d-9614-18cfec52b251"
2026-02-02 00:43:56.443230 | orchestrator |         }
2026-02-02 00:43:56.443237 | orchestrator |     }
2026-02-02 00:43:56.443244 | orchestrator | }
2026-02-02 00:43:56.443251 | orchestrator |
2026-02-02 00:43:56.443267 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-02 00:43:56.443274 | orchestrator | Monday 02 February 2026 00:43:54 +0000 (0:00:00.152) 0:00:42.884 *******
2026-02-02 00:43:56.443281 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:56.443288 | orchestrator |
2026-02-02 00:43:56.443295 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-02 00:43:56.443302 | orchestrator | Monday 02 February 2026 00:43:54 +0000 (0:00:00.163) 0:00:43.048 *******
2026-02-02 00:43:56.443308 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:56.443315 | orchestrator |
2026-02-02 00:43:56.443322 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-02 00:43:56.443329 | orchestrator | Monday 02 February 2026 00:43:55 +0000 (0:00:00.325) 0:00:43.373 *******
2026-02-02 00:43:56.443336 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:43:56.443343 | orchestrator |
2026-02-02 00:43:56.443349 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-02 00:43:56.443356 | orchestrator | Monday 02 February 2026 00:43:55 +0000 (0:00:00.169) 0:00:43.543 *******
2026-02-02 00:43:56.443363 | orchestrator | changed: [testbed-node-5] => {
2026-02-02 00:43:56.443370 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-02 00:43:56.443377 | orchestrator |         "ceph_osd_devices": {
2026-02-02 00:43:56.443384 | orchestrator |             "sdb": {
2026-02-02 00:43:56.443391 | orchestrator |                 "osd_lvm_uuid": "ee22aeb6-8be3-5eb7-a208-f7c11744cdf7"
2026-02-02 00:43:56.443398 | orchestrator |             },
2026-02-02 00:43:56.443404 | orchestrator |             "sdc": {
2026-02-02 00:43:56.443411 | orchestrator |                 "osd_lvm_uuid": "0f572543-3461-541d-9614-18cfec52b251"
2026-02-02 00:43:56.443418 | orchestrator |             }
2026-02-02 00:43:56.443425 | orchestrator |         },
2026-02-02 00:43:56.443432 | orchestrator |         "lvm_volumes": [
2026-02-02 00:43:56.443438 | orchestrator |             {
2026-02-02 00:43:56.443445 | orchestrator |                 "data": "osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7",
2026-02-02 00:43:56.443452 | orchestrator |                 "data_vg": "ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7"
2026-02-02 00:43:56.443459 | orchestrator |             },
2026-02-02 00:43:56.443469 | orchestrator |             {
2026-02-02 00:43:56.443476 | orchestrator |                 "data": "osd-block-0f572543-3461-541d-9614-18cfec52b251",
2026-02-02 00:43:56.443483 | orchestrator |                 "data_vg": "ceph-0f572543-3461-541d-9614-18cfec52b251"
2026-02-02 00:43:56.443490 | orchestrator |             }
2026-02-02 00:43:56.443497 | orchestrator |         ]
2026-02-02 00:43:56.443503 | orchestrator |     }
2026-02-02 00:43:56.443510 | orchestrator | }
2026-02-02 00:43:56.443517 | orchestrator |
2026-02-02 00:43:56.443524 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-02 00:43:56.443531 | orchestrator | Monday 02 February 2026 00:43:55 +0000 (0:00:00.268) 0:00:43.811 *******
2026-02-02 00:43:56.443537 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-02 00:43:56.443544 | orchestrator |
2026-02-02 00:43:56.443551 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:43:56.443558 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-02 00:43:56.443566 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-02 00:43:56.443573 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-02 00:43:56.443580 | orchestrator |
2026-02-02 00:43:56.443586 | orchestrator |
2026-02-02 00:43:56.443593 | orchestrator |
2026-02-02 00:43:56.443600 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:43:56.443607 | orchestrator | Monday 02 February 2026 00:43:56 +0000 (0:00:00.961) 0:00:44.773 *******
2026-02-02 00:43:56.443620 | orchestrator | ===============================================================================
2026-02-02 00:43:56.443626 | orchestrator | Write configuration file ------------------------------------------------ 4.03s
2026-02-02 00:43:56.443633 | orchestrator | Add known partitions to the list of available block devices ------------- 1.32s
2026-02-02 00:43:56.443650 | orchestrator | Add known links to the list of available block devices ------------------ 1.31s
2026-02-02 00:43:56.443657 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.27s
2026-02-02 00:43:56.443664 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s
2026-02-02 00:43:56.443670 | orchestrator | Print configuration data ------------------------------------------------ 0.94s
2026-02-02 00:43:56.443677 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s
2026-02-02 00:43:56.443684 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s
2026-02-02 00:43:56.443691 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s
2026-02-02 00:43:56.443697 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s
2026-02-02 00:43:56.443704 | orchestrator | Get initial list of available block devices ----------------------------- 0.78s
2026-02-02 00:43:56.443711 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2026-02-02 00:43:56.443718 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s
2026-02-02 00:43:56.443729 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-02-02 00:43:56.810879 | orchestrator | Set DB devices config data ---------------------------------------------- 0.72s
2026-02-02 00:43:56.810999 | orchestrator | Generate WAL VG names --------------------------------------------------- 0.71s
2026-02-02 00:43:56.811011 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.70s
2026-02-02 00:43:56.811017 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2026-02-02 00:43:56.811023 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2026-02-02 00:43:56.811029 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2026-02-02 00:44:19.505481 | orchestrator | 2026-02-02 00:44:19 | INFO  | Task 6b1fc40f-3ab6-48a4-9d70-2cc600780bed (sync inventory) is running in background. Output coming soon.
2026-02-02 00:44:49.138976 | orchestrator | 2026-02-02 00:44:21 | INFO  | Starting group_vars file reorganization
2026-02-02 00:44:49.139070 | orchestrator | 2026-02-02 00:44:21 | INFO  | Moved 0 file(s) to their respective directories
2026-02-02 00:44:49.139077 | orchestrator | 2026-02-02 00:44:21 | INFO  | Group_vars file reorganization completed
2026-02-02 00:44:49.139082 | orchestrator | 2026-02-02 00:44:24 | INFO  | Starting variable preparation from inventory
2026-02-02 00:44:49.139087 | orchestrator | 2026-02-02 00:44:27 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-02 00:44:49.139092 | orchestrator | 2026-02-02 00:44:27 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-02 00:44:49.139110 | orchestrator | 2026-02-02 00:44:27 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-02 00:44:49.139114 | orchestrator | 2026-02-02 00:44:27 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-02 00:44:49.139118 | orchestrator | 2026-02-02 00:44:27 | INFO  | Variable preparation completed
2026-02-02 00:44:49.139138 | orchestrator | 2026-02-02 00:44:29 | INFO  | Starting inventory overwrite handling
2026-02-02 00:44:49.139143 | orchestrator | 2026-02-02 00:44:29 | INFO  | Handling group overwrites in 99-overwrite
2026-02-02 00:44:49.139149 | orchestrator | 2026-02-02 00:44:29 | INFO  | Removing group frr:children from 60-generic
2026-02-02 00:44:49.139173 | orchestrator | 2026-02-02 00:44:29 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-02 00:44:49.139179 | orchestrator | 2026-02-02 00:44:29 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-02 00:44:49.139185 | orchestrator | 2026-02-02 00:44:29 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-02 00:44:49.139190 | orchestrator | 2026-02-02 00:44:29 | INFO  | Handling group overwrites in 20-roles
2026-02-02 00:44:49.139196 | orchestrator | 2026-02-02 00:44:29 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-02 00:44:49.139202 | orchestrator | 2026-02-02 00:44:29 | INFO  | Removed 5 group(s) in total
2026-02-02 00:44:49.139208 | orchestrator | 2026-02-02 00:44:29 | INFO  | Inventory overwrite handling completed
2026-02-02 00:44:49.139214 | orchestrator | 2026-02-02 00:44:30 | INFO  | Starting merge of inventory files
2026-02-02 00:44:49.139221 | orchestrator | 2026-02-02 00:44:30 | INFO  | Inventory files merged successfully
2026-02-02 00:44:49.139227 | orchestrator | 2026-02-02 00:44:36 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-02 00:44:49.139232 | orchestrator | 2026-02-02 00:44:47 | INFO  | Successfully wrote ClusterShell configuration
2026-02-02 00:44:49.139239 | orchestrator | [master 81d68cf] 2026-02-02-00-44
2026-02-02 00:44:49.139246 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-02 00:44:51.665615 | orchestrator | 2026-02-02 00:44:51 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-02-02 00:44:51.727540 | orchestrator | 2026-02-02 00:44:51 | INFO  | Task 5d07c0ea-c5f2-4f8d-aff3-4becce1e242f (ceph-create-lvm-devices) was prepared for execution.
2026-02-02 00:44:51.727656 | orchestrator | 2026-02-02 00:44:51 | INFO  | It takes a moment until task 5d07c0ea-c5f2-4f8d-aff3-4becce1e242f (ceph-create-lvm-devices) has been started and output is visible here.
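The "inventory overwrite handling" logged above removes a group from every lower-priority inventory layer once a higher-priority layer (such as 99-overwrite or 20-roles) defines it, so the later merge sees exactly one owner per group. A minimal illustrative sketch of that idea, using the layer and group names from the log (hypothetical helper, not the actual OSISM implementation):

```python
# Hypothetical sketch of inventory overwrite handling: groups owned by an
# overwrite layer are stripped from all other layers before merging.
def handle_overwrites(layers: dict, overwrite_layers: list) -> int:
    """layers maps layer name -> set of group names; returns removal count."""
    removed = 0
    for owner in overwrite_layers:
        for group in layers.get(owner, set()):
            for name, groups in layers.items():
                if name != owner and group in groups:
                    print(f"Removing group {group} from {name}")
                    groups.remove(group)
                    removed += 1
    print(f"Removed {removed} group(s) in total")
    return removed

# Layer contents reconstructed from the log messages above (illustrative).
layers = {
    "99-overwrite": {"frr:children", "netbird:children", "ceph-mds", "ceph-rgw"},
    "20-roles": {"k3s_node"},
    "60-generic": {"frr:children"},
    "50-infrastructure": {"netbird:children", "k3s_node"},
    "50-ceph": {"ceph-mds", "ceph-rgw"},
}
handle_overwrites(layers, ["99-overwrite", "20-roles"])  # 5 removals, as logged
```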
2026-02-02 00:45:03.886493 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-02 00:45:03.886628 | orchestrator | 2.16.14
2026-02-02 00:45:03.886651 | orchestrator |
2026-02-02 00:45:03.886668 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-02 00:45:03.886711 | orchestrator |
2026-02-02 00:45:03.886726 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-02 00:45:03.886743 | orchestrator | Monday 02 February 2026 00:44:56 +0000 (0:00:00.290) 0:00:00.290 *******
2026-02-02 00:45:03.886759 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-02 00:45:03.886775 | orchestrator |
2026-02-02 00:45:03.886790 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-02 00:45:03.886806 | orchestrator | Monday 02 February 2026 00:44:56 +0000 (0:00:00.288) 0:00:00.578 *******
2026-02-02 00:45:03.886821 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:45:03.886836 | orchestrator |
2026-02-02 00:45:03.886851 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.886866 | orchestrator | Monday 02 February 2026 00:44:56 +0000 (0:00:00.220) 0:00:00.799 *******
2026-02-02 00:45:03.886880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-02 00:45:03.886895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-02 00:45:03.886910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-02 00:45:03.886926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-02 00:45:03.886994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-02 00:45:03.887009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-02 00:45:03.887026 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-02 00:45:03.887068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-02 00:45:03.887085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-02 00:45:03.887101 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-02 00:45:03.887116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-02 00:45:03.887131 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-02 00:45:03.887146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-02 00:45:03.887160 | orchestrator |
2026-02-02 00:45:03.887175 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.887191 | orchestrator | Monday 02 February 2026 00:44:57 +0000 (0:00:00.449) 0:00:01.248 *******
2026-02-02 00:45:03.887206 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.887222 | orchestrator |
2026-02-02 00:45:03.887238 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.887253 | orchestrator | Monday 02 February 2026 00:44:57 +0000 (0:00:00.200) 0:00:01.448 *******
2026-02-02 00:45:03.887269 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.887284 | orchestrator |
2026-02-02 00:45:03.887299 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.887313 | orchestrator | Monday 02 February 2026 00:44:57 +0000 (0:00:00.180) 0:00:01.628 *******
2026-02-02 00:45:03.887327 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.887342 | orchestrator |
2026-02-02 00:45:03.887357 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.887372 | orchestrator | Monday 02 February 2026 00:44:57 +0000 (0:00:00.236) 0:00:01.864 *******
2026-02-02 00:45:03.887388 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.887403 | orchestrator |
2026-02-02 00:45:03.887419 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.887433 | orchestrator | Monday 02 February 2026 00:44:58 +0000 (0:00:00.202) 0:00:02.067 *******
2026-02-02 00:45:03.887447 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.887463 | orchestrator |
2026-02-02 00:45:03.887478 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.887513 | orchestrator | Monday 02 February 2026 00:44:58 +0000 (0:00:00.166) 0:00:02.234 *******
2026-02-02 00:45:03.887528 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.887543 | orchestrator |
2026-02-02 00:45:03.887558 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.887573 | orchestrator | Monday 02 February 2026 00:44:58 +0000 (0:00:00.168) 0:00:02.402 *******
2026-02-02 00:45:03.887588 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.887602 | orchestrator |
2026-02-02 00:45:03.887618 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.887633 | orchestrator | Monday 02 February 2026 00:44:58 +0000 (0:00:00.161) 0:00:02.564 *******
2026-02-02 00:45:03.887647 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.887662 | orchestrator |
2026-02-02 00:45:03.887677 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.887692 | orchestrator | Monday 02 February 2026 00:44:58 +0000 (0:00:00.223) 0:00:02.787 *******
2026-02-02 00:45:03.887707 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad)
2026-02-02 00:45:03.887723 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad)
2026-02-02 00:45:03.887738 | orchestrator |
2026-02-02 00:45:03.887753 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.887792 | orchestrator | Monday 02 February 2026 00:44:59 +0000 (0:00:00.369) 0:00:03.157 *******
2026-02-02 00:45:03.887819 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8)
2026-02-02 00:45:03.887834 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8)
2026-02-02 00:45:03.887849 | orchestrator |
2026-02-02 00:45:03.887863 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.887878 | orchestrator | Monday 02 February 2026 00:44:59 +0000 (0:00:00.614) 0:00:03.772 *******
2026-02-02 00:45:03.887893 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2791ba27-caf1-4c6d-bd50-3e0320bbaa42)
2026-02-02 00:45:03.887908 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2791ba27-caf1-4c6d-bd50-3e0320bbaa42)
2026-02-02 00:45:03.887922 | orchestrator |
2026-02-02 00:45:03.887963 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.887978 | orchestrator | Monday 02 February 2026 00:45:00 +0000 (0:00:00.719) 0:00:04.492 *******
2026-02-02 00:45:03.887993 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b0ea612a-524a-49e0-9350-b51de64b4b0f)
2026-02-02 00:45:03.888008 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b0ea612a-524a-49e0-9350-b51de64b4b0f)
2026-02-02 00:45:03.888023 | orchestrator |
2026-02-02 00:45:03.888040 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:03.888055 | orchestrator | Monday 02 February 2026 00:45:01 +0000 (0:00:01.016) 0:00:05.508 *******
2026-02-02 00:45:03.888071 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-02 00:45:03.888087 | orchestrator |
2026-02-02 00:45:03.888103 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:03.888118 | orchestrator | Monday 02 February 2026 00:45:01 +0000 (0:00:00.343) 0:00:05.852 *******
2026-02-02 00:45:03.888133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-02 00:45:03.888148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-02 00:45:03.888163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-02 00:45:03.888178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-02 00:45:03.888192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-02 00:45:03.888214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-02 00:45:03.888230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-02 00:45:03.888244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-02 00:45:03.888258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-02 00:45:03.888273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-02 00:45:03.888288 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-02 00:45:03.888302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-02 00:45:03.888316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-02 00:45:03.888330 | orchestrator |
2026-02-02 00:45:03.888345 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:03.888359 | orchestrator | Monday 02 February 2026 00:45:02 +0000 (0:00:00.457) 0:00:06.309 *******
2026-02-02 00:45:03.888374 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.888388 | orchestrator |
2026-02-02 00:45:03.888403 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:03.888417 | orchestrator | Monday 02 February 2026 00:45:02 +0000 (0:00:00.201) 0:00:06.510 *******
2026-02-02 00:45:03.888472 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.888488 | orchestrator |
2026-02-02 00:45:03.888504 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:03.888518 | orchestrator | Monday 02 February 2026 00:45:02 +0000 (0:00:00.227) 0:00:06.738 *******
2026-02-02 00:45:03.888532 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.888546 | orchestrator |
2026-02-02 00:45:03.888561 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:03.888577 | orchestrator | Monday 02 February 2026 00:45:02 +0000 (0:00:00.206) 0:00:06.944 *******
2026-02-02 00:45:03.888591 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.888606 | orchestrator |
2026-02-02 00:45:03.888620 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:03.888635 | orchestrator | Monday 02 February 2026 00:45:03 +0000 (0:00:00.219) 0:00:07.164 *******
2026-02-02 00:45:03.888650 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.888663 | orchestrator |
2026-02-02 00:45:03.888678 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:03.888691 | orchestrator | Monday 02 February 2026 00:45:03 +0000 (0:00:00.214) 0:00:07.378 *******
2026-02-02 00:45:03.888704 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.888718 | orchestrator |
2026-02-02 00:45:03.888733 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:03.888748 | orchestrator | Monday 02 February 2026 00:45:03 +0000 (0:00:00.216) 0:00:07.595 *******
2026-02-02 00:45:03.888763 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:03.888776 | orchestrator |
2026-02-02 00:45:03.888801 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:12.628973 | orchestrator | Monday 02 February 2026 00:45:03 +0000 (0:00:00.229) 0:00:07.825 *******
2026-02-02 00:45:12.629116 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.629146 | orchestrator |
2026-02-02 00:45:12.629167 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:12.629187 | orchestrator | Monday 02 February 2026 00:45:04 +0000 (0:00:00.188) 0:00:08.013 *******
2026-02-02 00:45:12.629205 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-02 00:45:12.629224 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-02 00:45:12.629242 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-02 00:45:12.629261 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-02 00:45:12.629279 | orchestrator |
2026-02-02 00:45:12.629298 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:12.629319 | orchestrator | Monday 02 February 2026 00:45:05 +0000 (0:00:01.139) 0:00:09.153 *******
2026-02-02 00:45:12.629337 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.629355 | orchestrator |
2026-02-02 00:45:12.629373 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:12.629392 | orchestrator | Monday 02 February 2026 00:45:05 +0000 (0:00:00.217) 0:00:09.371 *******
2026-02-02 00:45:12.629411 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.629428 | orchestrator |
2026-02-02 00:45:12.629447 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:12.629466 | orchestrator | Monday 02 February 2026 00:45:05 +0000 (0:00:00.235) 0:00:09.606 *******
2026-02-02 00:45:12.629485 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.629503 | orchestrator |
2026-02-02 00:45:12.629521 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:12.629541 | orchestrator | Monday 02 February 2026 00:45:05 +0000 (0:00:00.233) 0:00:09.840 *******
2026-02-02 00:45:12.629560 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.629577 | orchestrator |
2026-02-02 00:45:12.629595 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-02 00:45:12.629613 | orchestrator | Monday 02 February 2026 00:45:06 +0000 (0:00:00.237) 0:00:10.078 *******
2026-02-02 00:45:12.629631 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.629680 | orchestrator |
2026-02-02 00:45:12.629701 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-02 00:45:12.629719 | orchestrator | Monday 02 February 2026 00:45:06 +0000 (0:00:00.159) 0:00:10.238 *******
2026-02-02 00:45:12.629737 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91c179ef-578a-54fb-a2b0-5b892bd3ac18'}})
2026-02-02 00:45:12.629756 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '91730114-ee0c-5e20-9378-f20099298830'}})
2026-02-02 00:45:12.629775 | orchestrator |
2026-02-02 00:45:12.629793 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-02 00:45:12.629811 | orchestrator | Monday 02 February 2026 00:45:06 +0000 (0:00:00.257) 0:00:10.495 *******
2026-02-02 00:45:12.629832 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})
2026-02-02 00:45:12.629852 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})
2026-02-02 00:45:12.629870 | orchestrator |
2026-02-02 00:45:12.629889 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-02 00:45:12.629906 | orchestrator | Monday 02 February 2026 00:45:08 +0000 (0:00:01.967) 0:00:12.463 *******
2026-02-02 00:45:12.629923 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})
2026-02-02 00:45:12.629973 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})
2026-02-02 00:45:12.629991 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.630009 | orchestrator |
2026-02-02 00:45:12.630096 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-02 00:45:12.630163 | orchestrator | Monday 02 February 2026 00:45:08 +0000 (0:00:00.166) 0:00:12.630 *******
2026-02-02 00:45:12.630181 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})
2026-02-02 00:45:12.630201 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})
2026-02-02 00:45:12.630218 | orchestrator |
2026-02-02 00:45:12.630254 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-02 00:45:12.630273 | orchestrator | Monday 02 February 2026 00:45:10 +0000 (0:00:01.397) 0:00:14.027 *******
2026-02-02 00:45:12.630291 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})
2026-02-02 00:45:12.630310 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})
2026-02-02 00:45:12.630328 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.630347 | orchestrator |
2026-02-02 00:45:12.630365 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-02 00:45:12.630383 | orchestrator | Monday 02 February 2026 00:45:10 +0000 (0:00:00.176) 0:00:14.208 *******
2026-02-02 00:45:12.630430 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.630452 | orchestrator |
2026-02-02 00:45:12.630470 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-02 00:45:12.630489 | orchestrator | Monday 02 February 2026 00:45:10 +0000 (0:00:00.176) 0:00:14.385 *******
2026-02-02 00:45:12.630507 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})
2026-02-02 00:45:12.630526 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})
2026-02-02 00:45:12.630563 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.630581 | orchestrator |
2026-02-02 00:45:12.630597 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-02 00:45:12.630615 | orchestrator | Monday 02 February 2026 00:45:10 +0000 (0:00:00.503) 0:00:14.888 *******
2026-02-02 00:45:12.630631 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.630648 | orchestrator |
2026-02-02 00:45:12.630667 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-02 00:45:12.630685 | orchestrator | Monday 02 February 2026 00:45:11 +0000 (0:00:00.211) 0:00:15.099 *******
2026-02-02 00:45:12.630703 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})
2026-02-02 00:45:12.630720 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})
2026-02-02 00:45:12.630739 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.630756 | orchestrator |
2026-02-02 00:45:12.630775 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-02 00:45:12.630795 | orchestrator | Monday 02 February 2026 00:45:11 +0000 (0:00:00.190) 0:00:15.290 *******
2026-02-02 00:45:12.630813 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.630832 | orchestrator |
2026-02-02 00:45:12.630850 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-02 00:45:12.630868 | orchestrator | Monday 02 February 2026 00:45:11 +0000 (0:00:00.176) 0:00:15.466 *******
2026-02-02 00:45:12.630886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})
2026-02-02 00:45:12.630914 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})
2026-02-02 00:45:12.630980 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.630993 | orchestrator |
2026-02-02 00:45:12.631005 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-02 00:45:12.631016 | orchestrator | Monday 02 February 2026 00:45:11 +0000 (0:00:00.207) 0:00:15.673 *******
2026-02-02 00:45:12.631027 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:45:12.631038 | orchestrator |
2026-02-02 00:45:12.631049 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-02 00:45:12.631060 | orchestrator | Monday 02 February 2026 00:45:11 +0000 (0:00:00.167) 0:00:15.841 *******
2026-02-02 00:45:12.631071 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})
2026-02-02 00:45:12.631082 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})
2026-02-02 00:45:12.631093 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.631104 | orchestrator |
2026-02-02 00:45:12.631115 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-02 00:45:12.631126 | orchestrator | Monday 02 February 2026 00:45:12 +0000 (0:00:00.179) 0:00:16.020 *******
2026-02-02 00:45:12.631137 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})
2026-02-02 00:45:12.631147 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})
2026-02-02 00:45:12.631158 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.631169 | orchestrator |
2026-02-02 00:45:12.631180 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-02 00:45:12.631201 | orchestrator | Monday 02 February 2026 00:45:12 +0000 (0:00:00.191) 0:00:16.212 *******
2026-02-02 00:45:12.631213 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})
2026-02-02 00:45:12.631224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})
2026-02-02 00:45:12.631235 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.631246 | orchestrator |
2026-02-02 00:45:12.631257 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-02 00:45:12.631268 | orchestrator | Monday 02 February 2026 00:45:12 +0000 (0:00:00.185) 0:00:16.397 *******
2026-02-02 00:45:12.631278 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:12.631289 | orchestrator |
2026-02-02 00:45:12.631300 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-02 00:45:12.631324 | orchestrator | Monday 02 February 2026 00:45:12 +0000 (0:00:00.177) 0:00:16.574 *******
2026-02-02 00:45:19.665240 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:19.665304 | orchestrator |
2026-02-02 00:45:19.665313 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-02 00:45:19.665321 | orchestrator | Monday 02 February 2026 00:45:12 +0000 (0:00:00.152) 0:00:16.727 *******
2026-02-02 00:45:19.665328 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:19.665335 | orchestrator |
2026-02-02 00:45:19.665341 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-02 00:45:19.665348 | orchestrator | Monday 02 February 2026 00:45:12 +0000 (0:00:00.179) 0:00:16.907 *******
2026-02-02 00:45:19.665354 | orchestrator | ok: [testbed-node-3] => {
2026-02-02 00:45:19.665361 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-02 00:45:19.665368 | orchestrator | }
2026-02-02 00:45:19.665374 | orchestrator |
2026-02-02 00:45:19.665381 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-02 00:45:19.665387 | orchestrator | Monday 02 February 2026 00:45:13 +0000 (0:00:00.384) 0:00:17.292 *******
2026-02-02 00:45:19.665394 | orchestrator | ok: [testbed-node-3] => {
2026-02-02 00:45:19.665401 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-02 00:45:19.665407 | orchestrator | }
2026-02-02 00:45:19.665414 | orchestrator |
2026-02-02 00:45:19.665421 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-02 00:45:19.665427 | orchestrator | Monday 02 February 2026 00:45:13 +0000 (0:00:00.169) 0:00:17.462 *******
2026-02-02 00:45:19.665434 | orchestrator | ok: [testbed-node-3] => {
2026-02-02 00:45:19.665441 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-02 00:45:19.665448 | orchestrator | }
2026-02-02 00:45:19.665454 | orchestrator |
2026-02-02 00:45:19.665460 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-02 00:45:19.665466 | orchestrator | Monday 02 February 2026 00:45:13 +0000 (0:00:00.155) 0:00:17.617 *******
2026-02-02 00:45:19.665473 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:45:19.665480 | orchestrator |
2026-02-02 00:45:19.665485 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-02 00:45:19.665492 | orchestrator | Monday 02 February 2026 00:45:14 +0000 (0:00:00.840) 0:00:18.457 *******
2026-02-02 00:45:19.665498 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:45:19.665505 | orchestrator |
2026-02-02 00:45:19.665512 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-02 00:45:19.665518 | orchestrator | Monday 02 February 2026 00:45:15 +0000 (0:00:00.606) 0:00:19.064 *******
2026-02-02 00:45:19.665525 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:45:19.665532 | orchestrator |
2026-02-02 00:45:19.665539 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-02 00:45:19.665546 | orchestrator | Monday 02 February 2026 00:45:15 +0000 (0:00:00.623) 0:00:19.687 *******
2026-02-02 00:45:19.665552 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:45:19.665559 | orchestrator |
2026-02-02 00:45:19.665580 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-02 00:45:19.665587 | orchestrator | Monday 02 February 2026 00:45:15 +0000 (0:00:00.245) 0:00:19.932 *******
2026-02-02 00:45:19.665593 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:19.665600 | orchestrator |
2026-02-02 00:45:19.665606 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-02 00:45:19.665612 | orchestrator | Monday 02 February 2026 00:45:16 +0000 (0:00:00.144) 0:00:20.077 *******
2026-02-02 00:45:19.665618 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:19.665624 | orchestrator |
2026-02-02 00:45:19.665628 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-02 00:45:19.665632 | orchestrator | Monday 02 February 2026 00:45:16 +0000 (0:00:00.145) 0:00:20.223 *******
2026-02-02 00:45:19.665635 | orchestrator | ok: [testbed-node-3] => {
2026-02-02 00:45:19.665639 | orchestrator |     "vgs_report": {
2026-02-02 00:45:19.665643 | orchestrator |         "vg": []
2026-02-02 00:45:19.665647 | orchestrator |     }
2026-02-02 00:45:19.665651 | orchestrator | }
2026-02-02 00:45:19.665654 | orchestrator |
2026-02-02 00:45:19.665658 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-02 00:45:19.665662 | orchestrator | Monday 02 February 2026 00:45:16 +0000 (0:00:00.200) 0:00:20.423 *******
2026-02-02 00:45:19.665666 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:19.665670 | orchestrator |
2026-02-02 00:45:19.665673 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-02 00:45:19.665677 | orchestrator | Monday 02 February 2026 00:45:16 +0000 (0:00:00.149) 0:00:20.572 *******
2026-02-02 00:45:19.665681 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:19.665685 | orchestrator |
2026-02-02 00:45:19.665692 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-02 00:45:19.665698 | orchestrator | Monday 02 February 2026 00:45:16 +0000 (0:00:00.194) 0:00:20.767 *******
2026-02-02 00:45:19.665704 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:19.665710 | orchestrator |
2026-02-02 00:45:19.665717 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-02 00:45:19.665723 | orchestrator | Monday 02 February 2026 00:45:17 +0000 (0:00:00.331) 0:00:21.099 *******
2026-02-02 00:45:19.665730 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:45:19.665736 | orchestrator |
2026-02-02 00:45:19.665742 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-02 00:45:19.665748 | orchestrator | Monday
02 February 2026 00:45:17 +0000 (0:00:00.155) 0:00:21.254 ******* 2026-02-02 00:45:19.665754 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.665760 | orchestrator | 2026-02-02 00:45:19.665767 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-02 00:45:19.665773 | orchestrator | Monday 02 February 2026 00:45:17 +0000 (0:00:00.119) 0:00:21.374 ******* 2026-02-02 00:45:19.665779 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.665784 | orchestrator | 2026-02-02 00:45:19.665788 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-02 00:45:19.665792 | orchestrator | Monday 02 February 2026 00:45:17 +0000 (0:00:00.128) 0:00:21.502 ******* 2026-02-02 00:45:19.665795 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.665799 | orchestrator | 2026-02-02 00:45:19.665803 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-02 00:45:19.665807 | orchestrator | Monday 02 February 2026 00:45:17 +0000 (0:00:00.134) 0:00:21.637 ******* 2026-02-02 00:45:19.665819 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.665823 | orchestrator | 2026-02-02 00:45:19.665827 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-02 00:45:19.665832 | orchestrator | Monday 02 February 2026 00:45:17 +0000 (0:00:00.129) 0:00:21.767 ******* 2026-02-02 00:45:19.665836 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.665841 | orchestrator | 2026-02-02 00:45:19.665845 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-02 00:45:19.665854 | orchestrator | Monday 02 February 2026 00:45:17 +0000 (0:00:00.122) 0:00:21.889 ******* 2026-02-02 00:45:19.665858 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.665863 | orchestrator | 2026-02-02 00:45:19.665867 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-02 00:45:19.665872 | orchestrator | Monday 02 February 2026 00:45:18 +0000 (0:00:00.131) 0:00:22.021 ******* 2026-02-02 00:45:19.665876 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.665880 | orchestrator | 2026-02-02 00:45:19.665893 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-02 00:45:19.665898 | orchestrator | Monday 02 February 2026 00:45:18 +0000 (0:00:00.124) 0:00:22.145 ******* 2026-02-02 00:45:19.665902 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.665907 | orchestrator | 2026-02-02 00:45:19.665911 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-02 00:45:19.665916 | orchestrator | Monday 02 February 2026 00:45:18 +0000 (0:00:00.136) 0:00:22.282 ******* 2026-02-02 00:45:19.665920 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.665924 | orchestrator | 2026-02-02 00:45:19.665959 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-02 00:45:19.665964 | orchestrator | Monday 02 February 2026 00:45:18 +0000 (0:00:00.144) 0:00:22.426 ******* 2026-02-02 00:45:19.665969 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.665973 | orchestrator | 2026-02-02 00:45:19.665977 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-02 00:45:19.665981 | orchestrator | Monday 02 February 2026 00:45:18 +0000 (0:00:00.154) 0:00:22.581 ******* 2026-02-02 00:45:19.665986 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})  2026-02-02 00:45:19.665991 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 
'ceph-91730114-ee0c-5e20-9378-f20099298830'})  2026-02-02 00:45:19.665996 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.666000 | orchestrator | 2026-02-02 00:45:19.666004 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-02 00:45:19.666034 | orchestrator | Monday 02 February 2026 00:45:18 +0000 (0:00:00.307) 0:00:22.889 ******* 2026-02-02 00:45:19.666039 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})  2026-02-02 00:45:19.666043 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})  2026-02-02 00:45:19.666048 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.666052 | orchestrator | 2026-02-02 00:45:19.666056 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-02 00:45:19.666061 | orchestrator | Monday 02 February 2026 00:45:19 +0000 (0:00:00.173) 0:00:23.062 ******* 2026-02-02 00:45:19.666065 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})  2026-02-02 00:45:19.666069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})  2026-02-02 00:45:19.666074 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.666078 | orchestrator | 2026-02-02 00:45:19.666082 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-02 00:45:19.666087 | orchestrator | Monday 02 February 2026 00:45:19 +0000 (0:00:00.160) 0:00:23.223 ******* 2026-02-02 00:45:19.666091 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})  2026-02-02 00:45:19.666096 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})  2026-02-02 00:45:19.666103 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.666107 | orchestrator | 2026-02-02 00:45:19.666112 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-02 00:45:19.666116 | orchestrator | Monday 02 February 2026 00:45:19 +0000 (0:00:00.146) 0:00:23.370 ******* 2026-02-02 00:45:19.666121 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})  2026-02-02 00:45:19.666125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})  2026-02-02 00:45:19.666129 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:19.666133 | orchestrator | 2026-02-02 00:45:19.666138 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-02 00:45:19.666142 | orchestrator | Monday 02 February 2026 00:45:19 +0000 (0:00:00.185) 0:00:23.555 ******* 2026-02-02 00:45:19.666150 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})  2026-02-02 00:45:25.185274 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})  2026-02-02 00:45:25.185409 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:25.185437 | orchestrator | 2026-02-02 00:45:25.185459 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-02 00:45:25.185482 | orchestrator | Monday 02 February 2026 00:45:19 +0000 (0:00:00.145) 0:00:23.701 ******* 2026-02-02 00:45:25.185502 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})  2026-02-02 00:45:25.185528 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})  2026-02-02 00:45:25.185552 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:25.185575 | orchestrator | 2026-02-02 00:45:25.185598 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-02 00:45:25.185621 | orchestrator | Monday 02 February 2026 00:45:19 +0000 (0:00:00.144) 0:00:23.846 ******* 2026-02-02 00:45:25.185644 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})  2026-02-02 00:45:25.185669 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})  2026-02-02 00:45:25.185694 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:25.185717 | orchestrator | 2026-02-02 00:45:25.185740 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-02 00:45:25.185764 | orchestrator | Monday 02 February 2026 00:45:20 +0000 (0:00:00.165) 0:00:24.011 ******* 2026-02-02 00:45:25.185796 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:45:25.185825 | orchestrator | 2026-02-02 00:45:25.185856 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-02 00:45:25.185887 | orchestrator | Monday 02 February 2026 00:45:20 +0000 
(0:00:00.510) 0:00:24.522 ******* 2026-02-02 00:45:25.185919 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:45:25.185982 | orchestrator | 2026-02-02 00:45:25.186001 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-02 00:45:25.186127 | orchestrator | Monday 02 February 2026 00:45:21 +0000 (0:00:00.528) 0:00:25.051 ******* 2026-02-02 00:45:25.186153 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:45:25.186177 | orchestrator | 2026-02-02 00:45:25.186204 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-02 00:45:25.186225 | orchestrator | Monday 02 February 2026 00:45:21 +0000 (0:00:00.168) 0:00:25.220 ******* 2026-02-02 00:45:25.186281 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'vg_name': 'ceph-91730114-ee0c-5e20-9378-f20099298830'}) 2026-02-02 00:45:25.186306 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'vg_name': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'}) 2026-02-02 00:45:25.186327 | orchestrator | 2026-02-02 00:45:25.186349 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-02 00:45:25.186369 | orchestrator | Monday 02 February 2026 00:45:21 +0000 (0:00:00.184) 0:00:25.404 ******* 2026-02-02 00:45:25.186390 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})  2026-02-02 00:45:25.186412 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})  2026-02-02 00:45:25.186433 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:25.186454 | orchestrator | 2026-02-02 00:45:25.186476 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-02 00:45:25.186498 | orchestrator | Monday 02 February 2026 00:45:21 +0000 (0:00:00.388) 0:00:25.793 ******* 2026-02-02 00:45:25.186519 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})  2026-02-02 00:45:25.186541 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})  2026-02-02 00:45:25.186561 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:25.186579 | orchestrator | 2026-02-02 00:45:25.186598 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-02 00:45:25.186617 | orchestrator | Monday 02 February 2026 00:45:22 +0000 (0:00:00.171) 0:00:25.964 ******* 2026-02-02 00:45:25.186636 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})  2026-02-02 00:45:25.186655 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})  2026-02-02 00:45:25.186673 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:45:25.186692 | orchestrator | 2026-02-02 00:45:25.186710 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-02 00:45:25.186728 | orchestrator | Monday 02 February 2026 00:45:22 +0000 (0:00:00.174) 0:00:26.139 ******* 2026-02-02 00:45:25.186780 | orchestrator | ok: [testbed-node-3] => { 2026-02-02 00:45:25.186802 | orchestrator |  "lvm_report": { 2026-02-02 00:45:25.186823 | orchestrator |  "lv": [ 2026-02-02 00:45:25.186845 | orchestrator |  { 2026-02-02 00:45:25.186865 | orchestrator |  "lv_name": 
"osd-block-91730114-ee0c-5e20-9378-f20099298830", 2026-02-02 00:45:25.186887 | orchestrator |  "vg_name": "ceph-91730114-ee0c-5e20-9378-f20099298830" 2026-02-02 00:45:25.186910 | orchestrator |  }, 2026-02-02 00:45:25.186956 | orchestrator |  { 2026-02-02 00:45:25.186980 | orchestrator |  "lv_name": "osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18", 2026-02-02 00:45:25.187001 | orchestrator |  "vg_name": "ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18" 2026-02-02 00:45:25.187018 | orchestrator |  } 2026-02-02 00:45:25.187037 | orchestrator |  ], 2026-02-02 00:45:25.187057 | orchestrator |  "pv": [ 2026-02-02 00:45:25.187077 | orchestrator |  { 2026-02-02 00:45:25.187097 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-02 00:45:25.187119 | orchestrator |  "vg_name": "ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18" 2026-02-02 00:45:25.187136 | orchestrator |  }, 2026-02-02 00:45:25.187154 | orchestrator |  { 2026-02-02 00:45:25.187191 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-02 00:45:25.187214 | orchestrator |  "vg_name": "ceph-91730114-ee0c-5e20-9378-f20099298830" 2026-02-02 00:45:25.187236 | orchestrator |  } 2026-02-02 00:45:25.187257 | orchestrator |  ] 2026-02-02 00:45:25.187279 | orchestrator |  } 2026-02-02 00:45:25.187300 | orchestrator | } 2026-02-02 00:45:25.187321 | orchestrator | 2026-02-02 00:45:25.187342 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-02 00:45:25.187363 | orchestrator | 2026-02-02 00:45:25.187384 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-02 00:45:25.187406 | orchestrator | Monday 02 February 2026 00:45:22 +0000 (0:00:00.304) 0:00:26.443 ******* 2026-02-02 00:45:25.187428 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-02 00:45:25.187451 | orchestrator | 2026-02-02 00:45:25.187473 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-02 
00:45:25.187493 | orchestrator | Monday 02 February 2026 00:45:22 +0000 (0:00:00.248) 0:00:26.692 ******* 2026-02-02 00:45:25.187513 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:45:25.187535 | orchestrator | 2026-02-02 00:45:25.187552 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:25.187569 | orchestrator | Monday 02 February 2026 00:45:22 +0000 (0:00:00.221) 0:00:26.914 ******* 2026-02-02 00:45:25.187587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-02 00:45:25.187604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-02 00:45:25.187621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-02 00:45:25.187641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-02 00:45:25.187663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-02 00:45:25.187685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-02 00:45:25.187707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-02 00:45:25.187729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-02 00:45:25.187749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-02 00:45:25.187768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-02 00:45:25.187789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-02 00:45:25.187810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-02 00:45:25.187829 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-02 00:45:25.187849 | orchestrator | 2026-02-02 00:45:25.187870 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:25.187890 | orchestrator | Monday 02 February 2026 00:45:23 +0000 (0:00:00.463) 0:00:27.378 ******* 2026-02-02 00:45:25.187910 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:25.187955 | orchestrator | 2026-02-02 00:45:25.187977 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:25.188019 | orchestrator | Monday 02 February 2026 00:45:23 +0000 (0:00:00.203) 0:00:27.582 ******* 2026-02-02 00:45:25.188043 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:25.188065 | orchestrator | 2026-02-02 00:45:25.188087 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:25.188108 | orchestrator | Monday 02 February 2026 00:45:23 +0000 (0:00:00.217) 0:00:27.799 ******* 2026-02-02 00:45:25.188130 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:25.188154 | orchestrator | 2026-02-02 00:45:25.188175 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:25.188215 | orchestrator | Monday 02 February 2026 00:45:24 +0000 (0:00:00.669) 0:00:28.469 ******* 2026-02-02 00:45:25.188236 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:25.188257 | orchestrator | 2026-02-02 00:45:25.188279 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:25.188301 | orchestrator | Monday 02 February 2026 00:45:24 +0000 (0:00:00.208) 0:00:28.678 ******* 2026-02-02 00:45:25.188322 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:25.188341 | orchestrator | 2026-02-02 00:45:25.188358 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-02-02 00:45:25.188376 | orchestrator | Monday 02 February 2026 00:45:24 +0000 (0:00:00.233) 0:00:28.911 ******* 2026-02-02 00:45:25.188394 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:25.188410 | orchestrator | 2026-02-02 00:45:25.188443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:37.025071 | orchestrator | Monday 02 February 2026 00:45:25 +0000 (0:00:00.221) 0:00:29.132 ******* 2026-02-02 00:45:37.025172 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:37.025187 | orchestrator | 2026-02-02 00:45:37.025197 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:37.025207 | orchestrator | Monday 02 February 2026 00:45:25 +0000 (0:00:00.225) 0:00:29.357 ******* 2026-02-02 00:45:37.025217 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:37.025226 | orchestrator | 2026-02-02 00:45:37.025236 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:37.025245 | orchestrator | Monday 02 February 2026 00:45:25 +0000 (0:00:00.243) 0:00:29.601 ******* 2026-02-02 00:45:37.025254 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf) 2026-02-02 00:45:37.025265 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf) 2026-02-02 00:45:37.025274 | orchestrator | 2026-02-02 00:45:37.025283 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:37.025292 | orchestrator | Monday 02 February 2026 00:45:26 +0000 (0:00:00.432) 0:00:30.033 ******* 2026-02-02 00:45:37.025301 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8be21885-29f7-4026-87ce-cd032f624f70) 2026-02-02 00:45:37.025310 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8be21885-29f7-4026-87ce-cd032f624f70) 2026-02-02 00:45:37.025319 | orchestrator | 2026-02-02 00:45:37.025328 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:37.025353 | orchestrator | Monday 02 February 2026 00:45:26 +0000 (0:00:00.470) 0:00:30.504 ******* 2026-02-02 00:45:37.025369 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_78cf2400-96ef-4814-8ef8-9c5b7903f7b2) 2026-02-02 00:45:37.025385 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_78cf2400-96ef-4814-8ef8-9c5b7903f7b2) 2026-02-02 00:45:37.025400 | orchestrator | 2026-02-02 00:45:37.025414 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:37.025429 | orchestrator | Monday 02 February 2026 00:45:27 +0000 (0:00:00.476) 0:00:30.981 ******* 2026-02-02 00:45:37.025454 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2465f817-ac52-4990-8055-49becb307e2f) 2026-02-02 00:45:37.025464 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2465f817-ac52-4990-8055-49becb307e2f) 2026-02-02 00:45:37.025473 | orchestrator | 2026-02-02 00:45:37.025481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:37.025491 | orchestrator | Monday 02 February 2026 00:45:27 +0000 (0:00:00.704) 0:00:31.685 ******* 2026-02-02 00:45:37.025500 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-02 00:45:37.025508 | orchestrator | 2026-02-02 00:45:37.025517 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:37.025526 | orchestrator | Monday 02 February 2026 00:45:28 +0000 (0:00:00.617) 0:00:32.303 ******* 2026-02-02 00:45:37.025557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-02-02 00:45:37.025569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-02 00:45:37.025579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-02 00:45:37.025588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-02 00:45:37.025599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-02 00:45:37.025609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-02 00:45:37.025619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-02 00:45:37.025629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-02 00:45:37.025639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-02 00:45:37.025649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-02 00:45:37.025659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-02 00:45:37.025669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-02 00:45:37.025680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-02 00:45:37.025691 | orchestrator | 2026-02-02 00:45:37.025701 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:37.025712 | orchestrator | Monday 02 February 2026 00:45:29 +0000 (0:00:00.697) 0:00:33.000 ******* 2026-02-02 00:45:37.025722 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:37.025731 | orchestrator | 2026-02-02 
00:45:37.025740 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:37.025749 | orchestrator | Monday 02 February 2026 00:45:29 +0000 (0:00:00.231) 0:00:33.231 ******* 2026-02-02 00:45:37.025757 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:37.025766 | orchestrator | 2026-02-02 00:45:37.025775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:37.025784 | orchestrator | Monday 02 February 2026 00:45:29 +0000 (0:00:00.240) 0:00:33.472 ******* 2026-02-02 00:45:37.025792 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:37.025801 | orchestrator | 2026-02-02 00:45:37.025827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:37.025914 | orchestrator | Monday 02 February 2026 00:45:29 +0000 (0:00:00.197) 0:00:33.670 ******* 2026-02-02 00:45:37.025951 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:37.025965 | orchestrator | 2026-02-02 00:45:37.025980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:37.025994 | orchestrator | Monday 02 February 2026 00:45:29 +0000 (0:00:00.207) 0:00:33.877 ******* 2026-02-02 00:45:37.026007 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:37.026086 | orchestrator | 2026-02-02 00:45:37.026100 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:37.026113 | orchestrator | Monday 02 February 2026 00:45:30 +0000 (0:00:00.218) 0:00:34.096 ******* 2026-02-02 00:45:37.026127 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:37.026141 | orchestrator | 2026-02-02 00:45:37.026155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:37.026169 | orchestrator | Monday 02 February 2026 00:45:30 +0000 (0:00:00.226) 
0:00:34.322 ******* 2026-02-02 00:45:37.026183 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:37.026198 | orchestrator | 2026-02-02 00:45:37.026212 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:37.026227 | orchestrator | Monday 02 February 2026 00:45:30 +0000 (0:00:00.224) 0:00:34.547 ******* 2026-02-02 00:45:37.026258 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:37.026273 | orchestrator | 2026-02-02 00:45:37.026285 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:37.026294 | orchestrator | Monday 02 February 2026 00:45:30 +0000 (0:00:00.236) 0:00:34.784 ******* 2026-02-02 00:45:37.026303 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-02 00:45:37.026312 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-02 00:45:37.026321 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-02 00:45:37.026330 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-02 00:45:37.026339 | orchestrator | 2026-02-02 00:45:37.026348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:37.026356 | orchestrator | Monday 02 February 2026 00:45:31 +0000 (0:00:01.026) 0:00:35.811 ******* 2026-02-02 00:45:37.026365 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:37.026374 | orchestrator | 2026-02-02 00:45:37.026383 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:37.026391 | orchestrator | Monday 02 February 2026 00:45:32 +0000 (0:00:00.237) 0:00:36.048 ******* 2026-02-02 00:45:37.026407 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:45:37.026416 | orchestrator | 2026-02-02 00:45:37.026425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:37.026434 | orchestrator | Monday 02 
February 2026 00:45:32 +0000 (0:00:00.768) 0:00:36.817 *******
2026-02-02 00:45:37.026443 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:37.026451 | orchestrator |
2026-02-02 00:45:37.026460 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 00:45:37.026469 | orchestrator | Monday 02 February 2026 00:45:33 +0000 (0:00:00.205) 0:00:37.023 *******
2026-02-02 00:45:37.026590 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:37.026602 | orchestrator |
2026-02-02 00:45:37.026611 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-02 00:45:37.026620 | orchestrator | Monday 02 February 2026 00:45:33 +0000 (0:00:00.221) 0:00:37.244 *******
2026-02-02 00:45:37.026629 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:37.026637 | orchestrator |
2026-02-02 00:45:37.026646 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-02 00:45:37.026655 | orchestrator | Monday 02 February 2026 00:45:33 +0000 (0:00:00.145) 0:00:37.390 *******
2026-02-02 00:45:37.026665 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '604951f0-1bde-54b3-957a-2369560b0fa2'}})
2026-02-02 00:45:37.026674 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edd20676-fc89-5b2b-b977-99722e90cce2'}})
2026-02-02 00:45:37.026683 | orchestrator |
2026-02-02 00:45:37.026692 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-02 00:45:37.026701 | orchestrator | Monday 02 February 2026 00:45:33 +0000 (0:00:00.194) 0:00:37.584 *******
2026-02-02 00:45:37.026711 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:37.026721 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:37.026730 | orchestrator |
2026-02-02 00:45:37.026739 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-02 00:45:37.026747 | orchestrator | Monday 02 February 2026 00:45:35 +0000 (0:00:01.896) 0:00:39.481 *******
2026-02-02 00:45:37.026756 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:37.026766 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:37.026785 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:37.026794 | orchestrator |
2026-02-02 00:45:37.026803 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-02 00:45:37.026812 | orchestrator | Monday 02 February 2026 00:45:35 +0000 (0:00:00.162) 0:00:39.644 *******
2026-02-02 00:45:37.026820 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:37.026842 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:43.480862 | orchestrator |
2026-02-02 00:45:43.481056 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-02 00:45:43.481169 | orchestrator | Monday 02 February 2026 00:45:37 +0000 (0:00:01.434) 0:00:41.078 *******
2026-02-02 00:45:43.481190 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:43.481210 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:43.481228 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.481248 | orchestrator |
2026-02-02 00:45:43.481267 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-02 00:45:43.481285 | orchestrator | Monday 02 February 2026 00:45:37 +0000 (0:00:00.164) 0:00:41.243 *******
2026-02-02 00:45:43.481304 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.481321 | orchestrator |
2026-02-02 00:45:43.481339 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-02 00:45:43.481357 | orchestrator | Monday 02 February 2026 00:45:37 +0000 (0:00:00.146) 0:00:41.389 *******
2026-02-02 00:45:43.481375 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:43.481393 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:43.481412 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.481431 | orchestrator |
2026-02-02 00:45:43.481449 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-02 00:45:43.481469 | orchestrator | Monday 02 February 2026 00:45:37 +0000 (0:00:00.179) 0:00:41.569 *******
2026-02-02 00:45:43.481488 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.481508 | orchestrator |
2026-02-02 00:45:43.481526 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-02 00:45:43.481545 | orchestrator | Monday 02 February 2026 00:45:37 +0000 (0:00:00.144) 0:00:41.713 *******
2026-02-02 00:45:43.481564 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:43.481583 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:43.481600 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.481618 | orchestrator |
2026-02-02 00:45:43.481636 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-02 00:45:43.481654 | orchestrator | Monday 02 February 2026 00:45:38 +0000 (0:00:00.417) 0:00:42.131 *******
2026-02-02 00:45:43.481671 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.481689 | orchestrator |
2026-02-02 00:45:43.481707 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-02 00:45:43.481725 | orchestrator | Monday 02 February 2026 00:45:38 +0000 (0:00:00.154) 0:00:42.285 *******
2026-02-02 00:45:43.481743 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:43.481790 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:43.481809 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.481828 | orchestrator |
2026-02-02 00:45:43.481846 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-02 00:45:43.481882 | orchestrator | Monday 02 February 2026 00:45:38 +0000 (0:00:00.198) 0:00:42.484 *******
2026-02-02 00:45:43.481902 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:45:43.481921 | orchestrator |
2026-02-02 00:45:43.481962 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-02 00:45:43.481982 | orchestrator | Monday 02 February 2026 00:45:38 +0000 (0:00:00.169) 0:00:42.654 *******
2026-02-02 00:45:43.482002 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:43.482085 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:43.482106 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.482125 | orchestrator |
2026-02-02 00:45:43.482144 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-02 00:45:43.482163 | orchestrator | Monday 02 February 2026 00:45:38 +0000 (0:00:00.176) 0:00:42.831 *******
2026-02-02 00:45:43.482183 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:43.482203 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:43.482222 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.482243 | orchestrator |
2026-02-02 00:45:43.482264 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-02 00:45:43.482309 | orchestrator | Monday 02 February 2026 00:45:39 +0000 (0:00:00.198) 0:00:43.029 *******
2026-02-02 00:45:43.482331 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:43.482349 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:43.482369 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.482388 | orchestrator |
2026-02-02 00:45:43.482409 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-02 00:45:43.482428 | orchestrator | Monday 02 February 2026 00:45:39 +0000 (0:00:00.180) 0:00:43.209 *******
2026-02-02 00:45:43.482446 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.482466 | orchestrator |
2026-02-02 00:45:43.482485 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-02 00:45:43.482503 | orchestrator | Monday 02 February 2026 00:45:39 +0000 (0:00:00.174) 0:00:43.384 *******
2026-02-02 00:45:43.482518 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.482529 | orchestrator |
2026-02-02 00:45:43.482540 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-02 00:45:43.482551 | orchestrator | Monday 02 February 2026 00:45:39 +0000 (0:00:00.167) 0:00:43.551 *******
2026-02-02 00:45:43.482562 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.482573 | orchestrator |
2026-02-02 00:45:43.482584 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-02 00:45:43.482595 | orchestrator | Monday 02 February 2026 00:45:39 +0000 (0:00:00.170) 0:00:43.722 *******
2026-02-02 00:45:43.482606 | orchestrator | ok: [testbed-node-4] => {
2026-02-02 00:45:43.482617 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-02 00:45:43.482644 | orchestrator | }
2026-02-02 00:45:43.482655 | orchestrator |
2026-02-02 00:45:43.482666 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-02 00:45:43.482677 | orchestrator | Monday 02 February 2026 00:45:39 +0000 (0:00:00.166) 0:00:43.888 *******
2026-02-02 00:45:43.482688 | orchestrator | ok: [testbed-node-4] => {
2026-02-02 00:45:43.482699 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-02 00:45:43.482710 | orchestrator | }
2026-02-02 00:45:43.482721 | orchestrator |
2026-02-02 00:45:43.482739 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-02 00:45:43.482750 | orchestrator | Monday 02 February 2026 00:45:40 +0000 (0:00:00.163) 0:00:44.052 *******
2026-02-02 00:45:43.482761 | orchestrator | ok: [testbed-node-4] => {
2026-02-02 00:45:43.482772 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-02 00:45:43.482784 | orchestrator | }
2026-02-02 00:45:43.482795 | orchestrator |
2026-02-02 00:45:43.482806 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-02 00:45:43.482817 | orchestrator | Monday 02 February 2026 00:45:40 +0000 (0:00:00.431) 0:00:44.484 *******
2026-02-02 00:45:43.482828 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:45:43.482839 | orchestrator |
2026-02-02 00:45:43.482850 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-02 00:45:43.482860 | orchestrator | Monday 02 February 2026 00:45:41 +0000 (0:00:00.583) 0:00:45.067 *******
2026-02-02 00:45:43.482871 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:45:43.482882 | orchestrator |
2026-02-02 00:45:43.482893 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-02 00:45:43.482904 | orchestrator | Monday 02 February 2026 00:45:41 +0000 (0:00:00.564) 0:00:45.632 *******
2026-02-02 00:45:43.482913 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:45:43.482921 | orchestrator |
2026-02-02 00:45:43.482966 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-02 00:45:43.482975 | orchestrator | Monday 02 February 2026 00:45:42 +0000 (0:00:00.538) 0:00:46.170 *******
2026-02-02 00:45:43.482982 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:45:43.482996 | orchestrator |
2026-02-02 00:45:43.483009 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-02 00:45:43.483017 | orchestrator | Monday 02 February 2026 00:45:42 +0000 (0:00:00.160) 0:00:46.331 *******
2026-02-02 00:45:43.483026 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.483033 | orchestrator |
2026-02-02 00:45:43.483042 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-02 00:45:43.483056 | orchestrator | Monday 02 February 2026 00:45:42 +0000 (0:00:00.141) 0:00:46.472 *******
2026-02-02 00:45:43.483068 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.483079 | orchestrator |
2026-02-02 00:45:43.483092 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-02 00:45:43.483102 | orchestrator | Monday 02 February 2026 00:45:42 +0000 (0:00:00.143) 0:00:46.615 *******
2026-02-02 00:45:43.483116 | orchestrator | ok: [testbed-node-4] => {
2026-02-02 00:45:43.483130 | orchestrator |     "vgs_report": {
2026-02-02 00:45:43.483139 | orchestrator |         "vg": []
2026-02-02 00:45:43.483146 | orchestrator |     }
2026-02-02 00:45:43.483155 | orchestrator | }
2026-02-02 00:45:43.483163 | orchestrator |
2026-02-02 00:45:43.483171 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-02 00:45:43.483179 | orchestrator | Monday 02 February 2026 00:45:42 +0000 (0:00:00.161) 0:00:46.777 *******
2026-02-02 00:45:43.483187 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.483195 | orchestrator |
2026-02-02 00:45:43.483203 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-02 00:45:43.483211 | orchestrator | Monday 02 February 2026 00:45:42 +0000 (0:00:00.154) 0:00:46.933 *******
2026-02-02 00:45:43.483218 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.483226 | orchestrator |
2026-02-02 00:45:43.483234 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-02 00:45:43.483248 | orchestrator | Monday 02 February 2026 00:45:43 +0000 (0:00:00.154) 0:00:47.087 *******
2026-02-02 00:45:43.483256 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.483263 | orchestrator |
2026-02-02 00:45:43.483271 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-02 00:45:43.483279 | orchestrator | Monday 02 February 2026 00:45:43 +0000 (0:00:00.182) 0:00:47.269 *******
2026-02-02 00:45:43.483287 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:43.483295 | orchestrator |
2026-02-02 00:45:43.483311 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-02 00:45:48.204536 | orchestrator | Monday 02 February 2026 00:45:43 +0000 (0:00:00.160) 0:00:47.430 *******
2026-02-02 00:45:48.204641 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.204658 | orchestrator |
2026-02-02 00:45:48.204671 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-02 00:45:48.204684 | orchestrator | Monday 02 February 2026 00:45:43 +0000 (0:00:00.336) 0:00:47.766 *******
2026-02-02 00:45:48.204695 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.204706 | orchestrator |
2026-02-02 00:45:48.204718 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-02 00:45:48.204729 | orchestrator | Monday 02 February 2026 00:45:43 +0000 (0:00:00.118) 0:00:47.885 *******
2026-02-02 00:45:48.204740 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.204751 | orchestrator |
2026-02-02 00:45:48.204762 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-02 00:45:48.204773 | orchestrator | Monday 02 February 2026 00:45:44 +0000 (0:00:00.146) 0:00:48.032 *******
2026-02-02 00:45:48.204784 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.204795 | orchestrator |
2026-02-02 00:45:48.204806 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-02 00:45:48.204817 | orchestrator | Monday 02 February 2026 00:45:44 +0000 (0:00:00.147) 0:00:48.179 *******
2026-02-02 00:45:48.204828 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.204839 | orchestrator |
2026-02-02 00:45:48.204850 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-02 00:45:48.204861 | orchestrator | Monday 02 February 2026 00:45:44 +0000 (0:00:00.137) 0:00:48.317 *******
2026-02-02 00:45:48.204872 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.204883 | orchestrator |
2026-02-02 00:45:48.204894 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-02 00:45:48.204916 | orchestrator | Monday 02 February 2026 00:45:44 +0000 (0:00:00.138) 0:00:48.456 *******
2026-02-02 00:45:48.204978 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.204992 | orchestrator |
2026-02-02 00:45:48.205003 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-02 00:45:48.205014 | orchestrator | Monday 02 February 2026 00:45:44 +0000 (0:00:00.139) 0:00:48.596 *******
2026-02-02 00:45:48.205043 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.205056 | orchestrator |
2026-02-02 00:45:48.205069 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-02 00:45:48.205082 | orchestrator | Monday 02 February 2026 00:45:44 +0000 (0:00:00.119) 0:00:48.715 *******
2026-02-02 00:45:48.205094 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.205107 | orchestrator |
2026-02-02 00:45:48.205120 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-02 00:45:48.205134 | orchestrator | Monday 02 February 2026 00:45:44 +0000 (0:00:00.124) 0:00:48.839 *******
2026-02-02 00:45:48.205147 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.205159 | orchestrator |
2026-02-02 00:45:48.205172 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-02 00:45:48.205185 | orchestrator | Monday 02 February 2026 00:45:45 +0000 (0:00:00.128) 0:00:48.968 *******
2026-02-02 00:45:48.205199 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:48.205235 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:48.205248 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.205260 | orchestrator |
2026-02-02 00:45:48.205273 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-02 00:45:48.205286 | orchestrator | Monday 02 February 2026 00:45:45 +0000 (0:00:00.140) 0:00:49.109 *******
2026-02-02 00:45:48.205300 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:48.205313 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:48.205326 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.205338 | orchestrator |
2026-02-02 00:45:48.205350 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-02 00:45:48.205363 | orchestrator | Monday 02 February 2026 00:45:45 +0000 (0:00:00.170) 0:00:49.279 *******
2026-02-02 00:45:48.205375 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:48.205389 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:48.205401 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.205413 | orchestrator |
2026-02-02 00:45:48.205424 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-02 00:45:48.205435 | orchestrator | Monday 02 February 2026 00:45:45 +0000 (0:00:00.152) 0:00:49.432 *******
2026-02-02 00:45:48.205446 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:48.205457 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:48.205469 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.205480 | orchestrator |
2026-02-02 00:45:48.205508 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-02 00:45:48.205521 | orchestrator | Monday 02 February 2026 00:45:45 +0000 (0:00:00.300) 0:00:49.732 *******
2026-02-02 00:45:48.205532 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:48.205543 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:48.205554 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.205565 | orchestrator |
2026-02-02 00:45:48.205576 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-02 00:45:48.205587 | orchestrator | Monday 02 February 2026 00:45:45 +0000 (0:00:00.186) 0:00:49.918 *******
2026-02-02 00:45:48.205598 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:48.205609 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:48.205620 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.205631 | orchestrator |
2026-02-02 00:45:48.205642 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-02 00:45:48.205653 | orchestrator | Monday 02 February 2026 00:45:46 +0000 (0:00:00.174) 0:00:50.093 *******
2026-02-02 00:45:48.205664 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:48.205682 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:48.205693 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.205704 | orchestrator |
2026-02-02 00:45:48.205715 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-02 00:45:48.205726 | orchestrator | Monday 02 February 2026 00:45:46 +0000 (0:00:00.174) 0:00:50.267 *******
2026-02-02 00:45:48.205737 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:48.205748 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:48.205759 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.205770 | orchestrator |
2026-02-02 00:45:48.205781 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-02 00:45:48.205792 | orchestrator | Monday 02 February 2026 00:45:46 +0000 (0:00:00.183) 0:00:50.451 *******
2026-02-02 00:45:48.205803 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:45:48.205814 | orchestrator |
2026-02-02 00:45:48.205824 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-02 00:45:48.205843 | orchestrator | Monday 02 February 2026 00:45:47 +0000 (0:00:00.579) 0:00:51.030 *******
2026-02-02 00:45:48.205863 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:45:48.205882 | orchestrator |
2026-02-02 00:45:48.205899 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-02 00:45:48.205917 | orchestrator | Monday 02 February 2026 00:45:47 +0000 (0:00:00.544) 0:00:51.575 *******
2026-02-02 00:45:48.205956 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:45:48.205974 | orchestrator |
2026-02-02 00:45:48.205992 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-02 00:45:48.206010 | orchestrator | Monday 02 February 2026 00:45:47 +0000 (0:00:00.150) 0:00:51.726 *******
2026-02-02 00:45:48.206100 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'vg_name': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:48.206113 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'vg_name': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:48.206124 | orchestrator |
2026-02-02 00:45:48.206135 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-02 00:45:48.206146 | orchestrator | Monday 02 February 2026 00:45:47 +0000 (0:00:00.199) 0:00:51.925 *******
2026-02-02 00:45:48.206157 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:48.206168 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:48.206179 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:48.206190 | orchestrator |
2026-02-02 00:45:48.206201 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-02 00:45:48.206212 | orchestrator | Monday 02 February 2026 00:45:48 +0000 (0:00:00.158) 0:00:52.084 *******
2026-02-02 00:45:48.206223 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:48.206244 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:54.836985 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:54.837104 | orchestrator |
2026-02-02 00:45:54.837145 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-02 00:45:54.837159 | orchestrator | Monday 02 February 2026 00:45:48 +0000 (0:00:00.149) 0:00:52.233 *******
2026-02-02 00:45:54.837171 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:45:54.837184 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:45:54.837195 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:45:54.837206 | orchestrator |
2026-02-02 00:45:54.837218 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-02 00:45:54.837229 | orchestrator | Monday 02 February 2026 00:45:48 +0000 (0:00:00.174) 0:00:52.408 *******
2026-02-02 00:45:54.837240 | orchestrator | ok: [testbed-node-4] => {
2026-02-02 00:45:54.837251 | orchestrator |     "lvm_report": {
2026-02-02 00:45:54.837263 | orchestrator |         "lv": [
2026-02-02 00:45:54.837273 | orchestrator |             {
2026-02-02 00:45:54.837284 | orchestrator |                 "lv_name": "osd-block-604951f0-1bde-54b3-957a-2369560b0fa2",
2026-02-02 00:45:54.837296 | orchestrator |                 "vg_name": "ceph-604951f0-1bde-54b3-957a-2369560b0fa2"
2026-02-02 00:45:54.837307 | orchestrator |             },
2026-02-02 00:45:54.837318 | orchestrator |             {
2026-02-02 00:45:54.837328 | orchestrator |                 "lv_name": "osd-block-edd20676-fc89-5b2b-b977-99722e90cce2",
2026-02-02 00:45:54.837345 | orchestrator |                 "vg_name": "ceph-edd20676-fc89-5b2b-b977-99722e90cce2"
2026-02-02 00:45:54.837362 | orchestrator |             }
2026-02-02 00:45:54.837381 | orchestrator |         ],
2026-02-02 00:45:54.837398 | orchestrator |         "pv": [
2026-02-02 00:45:54.837415 | orchestrator |             {
2026-02-02 00:45:54.837435 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-02 00:45:54.837463 | orchestrator |                 "vg_name": "ceph-604951f0-1bde-54b3-957a-2369560b0fa2"
2026-02-02 00:45:54.837482 | orchestrator |             },
2026-02-02 00:45:54.837501 | orchestrator |             {
2026-02-02 00:45:54.837519 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-02 00:45:54.837538 | orchestrator |                 "vg_name": "ceph-edd20676-fc89-5b2b-b977-99722e90cce2"
2026-02-02 00:45:54.837561 | orchestrator |             }
2026-02-02 00:45:54.837583 | orchestrator |         ]
2026-02-02 00:45:54.837601 | orchestrator |     }
2026-02-02 00:45:54.837614 | orchestrator | }
2026-02-02 00:45:54.837627 | orchestrator |
2026-02-02 00:45:54.837638 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-02 00:45:54.837650 | orchestrator |
2026-02-02 00:45:54.837661 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-02 00:45:54.837672 | orchestrator | Monday 02 February 2026 00:45:48 +0000 (0:00:00.513) 0:00:52.921 *******
2026-02-02 00:45:54.837684 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-02 00:45:54.837695 | orchestrator |
2026-02-02 00:45:54.837706 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-02 00:45:54.837717 | orchestrator | Monday 02 February 2026 00:45:49 +0000 (0:00:00.398) 0:00:53.320 *******
2026-02-02 00:45:54.837728 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:45:54.837739 | orchestrator |
2026-02-02 00:45:54.837750 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:54.837761 | orchestrator | Monday 02 February 2026 00:45:49 +0000 (0:00:00.244) 0:00:53.564 *******
2026-02-02 00:45:54.837772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-02 00:45:54.837783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-02 00:45:54.837794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-02 00:45:54.837805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-02 00:45:54.837825 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-02 00:45:54.837842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-02 00:45:54.837861 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-02 00:45:54.837877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-02 00:45:54.837892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-02 00:45:54.837916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-02 00:45:54.837974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-02 00:45:54.837992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-02 00:45:54.838009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-02 00:45:54.838102 | orchestrator |
2026-02-02 00:45:54.838115 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:54.838126 | orchestrator | Monday 02 February 2026 00:45:50 +0000 (0:00:00.433) 0:00:53.997 *******
2026-02-02 00:45:54.838137 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:45:54.838148 | orchestrator |
2026-02-02 00:45:54.838160 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:54.838171 | orchestrator | Monday 02 February 2026 00:45:50 +0000 (0:00:00.222) 0:00:54.220 *******
2026-02-02 00:45:54.838182 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:45:54.838193 | orchestrator |
2026-02-02 00:45:54.838204 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:54.838236 | orchestrator | Monday 02 February 2026 00:45:50 +0000 (0:00:00.245) 0:00:54.466 *******
2026-02-02 00:45:54.838247 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:45:54.838259 | orchestrator |
2026-02-02 00:45:54.838270 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:54.838281 | orchestrator | Monday 02 February 2026 00:45:50 +0000 (0:00:00.195) 0:00:54.662 *******
2026-02-02 00:45:54.838292 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:45:54.838303 | orchestrator |
2026-02-02 00:45:54.838314 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:54.838325 | orchestrator | Monday 02 February 2026 00:45:50 +0000 (0:00:00.221) 0:00:54.883 *******
2026-02-02 00:45:54.838336 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:45:54.838347 | orchestrator |
2026-02-02 00:45:54.838358 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:54.838369 | orchestrator | Monday 02 February 2026 00:45:51 +0000 (0:00:00.208) 0:00:55.092 *******
2026-02-02 00:45:54.838380 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:45:54.838390 | orchestrator |
2026-02-02 00:45:54.838401 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:54.838412 | orchestrator | Monday 02 February 2026 00:45:51 +0000 (0:00:00.668) 0:00:55.760 *******
2026-02-02 00:45:54.838423 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:45:54.838434 | orchestrator |
2026-02-02 00:45:54.838445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:54.838456 | orchestrator | Monday 02 February 2026 00:45:52 +0000 (0:00:00.216) 0:00:55.977 *******
2026-02-02 00:45:54.838467 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:45:54.838478 | orchestrator |
2026-02-02 00:45:54.838489 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:54.838500 | orchestrator | Monday 02 February 2026 00:45:52 +0000 (0:00:00.224) 0:00:56.201 *******
2026-02-02 00:45:54.838511 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db)
2026-02-02 00:45:54.838530 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db)
2026-02-02 00:45:54.838553 | orchestrator |
2026-02-02 00:45:54.838564 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:54.838575 | orchestrator | Monday 02 February 2026 00:45:52 +0000 (0:00:00.462) 0:00:56.664 *******
2026-02-02 00:45:54.838585 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_09bfd8bd-6f2f-4d2c-8da9-081114b71f81)
2026-02-02 00:45:54.838596 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_09bfd8bd-6f2f-4d2c-8da9-081114b71f81)
2026-02-02 00:45:54.838607 | orchestrator |
2026-02-02 00:45:54.838618 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:54.838629 | orchestrator | Monday 02 February 2026 00:45:53 +0000 (0:00:00.449) 0:00:57.114 *******
2026-02-02 00:45:54.838640 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_16c98517-e0bb-4d3e-881d-5a0c6479c324)
2026-02-02 00:45:54.838651 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_16c98517-e0bb-4d3e-881d-5a0c6479c324)
2026-02-02 00:45:54.838662 | orchestrator |
2026-02-02 00:45:54.838673 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 00:45:54.838684 | orchestrator | Monday 02
February 2026 00:45:53 +0000 (0:00:00.472) 0:00:57.586 ******* 2026-02-02 00:45:54.838695 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b6ca68c3-b66c-4649-954f-01a9ba336075) 2026-02-02 00:45:54.838706 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b6ca68c3-b66c-4649-954f-01a9ba336075) 2026-02-02 00:45:54.838717 | orchestrator | 2026-02-02 00:45:54.838727 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 00:45:54.838738 | orchestrator | Monday 02 February 2026 00:45:54 +0000 (0:00:00.467) 0:00:58.055 ******* 2026-02-02 00:45:54.838749 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-02 00:45:54.838760 | orchestrator | 2026-02-02 00:45:54.838771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:45:54.838782 | orchestrator | Monday 02 February 2026 00:45:54 +0000 (0:00:00.361) 0:00:58.416 ******* 2026-02-02 00:45:54.838793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-02 00:45:54.838804 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-02 00:45:54.838814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-02 00:45:54.838825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-02 00:45:54.838836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-02 00:45:54.838847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-02 00:45:54.838858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-02 00:45:54.838869 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-02 00:45:54.838880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-02 00:45:54.838890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-02 00:45:54.838902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-02 00:45:54.838919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-02 00:46:03.765761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-02 00:46:03.765820 | orchestrator | 2026-02-02 00:46:03.765827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:46:03.765832 | orchestrator | Monday 02 February 2026 00:45:54 +0000 (0:00:00.453) 0:00:58.869 ******* 2026-02-02 00:46:03.765850 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.765856 | orchestrator | 2026-02-02 00:46:03.765861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:46:03.765866 | orchestrator | Monday 02 February 2026 00:45:55 +0000 (0:00:00.232) 0:00:59.102 ******* 2026-02-02 00:46:03.765870 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.765875 | orchestrator | 2026-02-02 00:46:03.765880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:46:03.765884 | orchestrator | Monday 02 February 2026 00:45:55 +0000 (0:00:00.775) 0:00:59.877 ******* 2026-02-02 00:46:03.765889 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.765894 | orchestrator | 2026-02-02 00:46:03.765898 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:46:03.765903 | 
orchestrator | Monday 02 February 2026 00:45:56 +0000 (0:00:00.198) 0:01:00.075 ******* 2026-02-02 00:46:03.765908 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.765912 | orchestrator | 2026-02-02 00:46:03.765917 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:46:03.765921 | orchestrator | Monday 02 February 2026 00:45:56 +0000 (0:00:00.201) 0:01:00.277 ******* 2026-02-02 00:46:03.765961 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.765969 | orchestrator | 2026-02-02 00:46:03.765977 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:46:03.765985 | orchestrator | Monday 02 February 2026 00:45:56 +0000 (0:00:00.213) 0:01:00.491 ******* 2026-02-02 00:46:03.765993 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.766001 | orchestrator | 2026-02-02 00:46:03.766060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:46:03.766069 | orchestrator | Monday 02 February 2026 00:45:56 +0000 (0:00:00.205) 0:01:00.697 ******* 2026-02-02 00:46:03.766074 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.766078 | orchestrator | 2026-02-02 00:46:03.766083 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:46:03.766087 | orchestrator | Monday 02 February 2026 00:45:56 +0000 (0:00:00.191) 0:01:00.888 ******* 2026-02-02 00:46:03.766092 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.766097 | orchestrator | 2026-02-02 00:46:03.766101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:46:03.766106 | orchestrator | Monday 02 February 2026 00:45:57 +0000 (0:00:00.214) 0:01:01.102 ******* 2026-02-02 00:46:03.766110 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-02 00:46:03.766115 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-02-02 00:46:03.766120 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-02 00:46:03.766125 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-02 00:46:03.766129 | orchestrator | 2026-02-02 00:46:03.766134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:46:03.766139 | orchestrator | Monday 02 February 2026 00:45:57 +0000 (0:00:00.639) 0:01:01.741 ******* 2026-02-02 00:46:03.766144 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.766148 | orchestrator | 2026-02-02 00:46:03.766153 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:46:03.766157 | orchestrator | Monday 02 February 2026 00:45:57 +0000 (0:00:00.190) 0:01:01.931 ******* 2026-02-02 00:46:03.766162 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.766166 | orchestrator | 2026-02-02 00:46:03.766171 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:46:03.766176 | orchestrator | Monday 02 February 2026 00:45:58 +0000 (0:00:00.192) 0:01:02.124 ******* 2026-02-02 00:46:03.766180 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.766185 | orchestrator | 2026-02-02 00:46:03.766189 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 00:46:03.766194 | orchestrator | Monday 02 February 2026 00:45:58 +0000 (0:00:00.186) 0:01:02.310 ******* 2026-02-02 00:46:03.766203 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.766208 | orchestrator | 2026-02-02 00:46:03.766212 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-02 00:46:03.766217 | orchestrator | Monday 02 February 2026 00:45:58 +0000 (0:00:00.222) 0:01:02.533 ******* 2026-02-02 00:46:03.766221 | orchestrator | skipping: [testbed-node-5] 2026-02-02 
00:46:03.766226 | orchestrator | 2026-02-02 00:46:03.766231 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-02 00:46:03.766235 | orchestrator | Monday 02 February 2026 00:45:58 +0000 (0:00:00.320) 0:01:02.854 ******* 2026-02-02 00:46:03.766240 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'}}) 2026-02-02 00:46:03.766245 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0f572543-3461-541d-9614-18cfec52b251'}}) 2026-02-02 00:46:03.766249 | orchestrator | 2026-02-02 00:46:03.766254 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-02 00:46:03.766258 | orchestrator | Monday 02 February 2026 00:45:59 +0000 (0:00:00.176) 0:01:03.030 ******* 2026-02-02 00:46:03.766264 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'}) 2026-02-02 00:46:03.766270 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'}) 2026-02-02 00:46:03.766274 | orchestrator | 2026-02-02 00:46:03.766281 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-02 00:46:03.766300 | orchestrator | Monday 02 February 2026 00:46:00 +0000 (0:00:01.913) 0:01:04.943 ******* 2026-02-02 00:46:03.766309 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})  2026-02-02 00:46:03.766318 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})  2026-02-02 00:46:03.766326 | orchestrator | skipping: 
[testbed-node-5] 2026-02-02 00:46:03.766334 | orchestrator | 2026-02-02 00:46:03.766341 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-02 00:46:03.766350 | orchestrator | Monday 02 February 2026 00:46:01 +0000 (0:00:00.153) 0:01:05.097 ******* 2026-02-02 00:46:03.766356 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'}) 2026-02-02 00:46:03.766361 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'}) 2026-02-02 00:46:03.766366 | orchestrator | 2026-02-02 00:46:03.766371 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-02 00:46:03.766376 | orchestrator | Monday 02 February 2026 00:46:02 +0000 (0:00:01.285) 0:01:06.382 ******* 2026-02-02 00:46:03.766382 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})  2026-02-02 00:46:03.766387 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})  2026-02-02 00:46:03.766393 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.766398 | orchestrator | 2026-02-02 00:46:03.766403 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-02 00:46:03.766409 | orchestrator | Monday 02 February 2026 00:46:02 +0000 (0:00:00.140) 0:01:06.523 ******* 2026-02-02 00:46:03.766414 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.766419 | orchestrator | 2026-02-02 00:46:03.766424 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-02 00:46:03.766429 | 
orchestrator | Monday 02 February 2026 00:46:02 +0000 (0:00:00.136) 0:01:06.659 ******* 2026-02-02 00:46:03.766438 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})  2026-02-02 00:46:03.766444 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})  2026-02-02 00:46:03.766449 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.766454 | orchestrator | 2026-02-02 00:46:03.766459 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-02 00:46:03.766465 | orchestrator | Monday 02 February 2026 00:46:02 +0000 (0:00:00.157) 0:01:06.816 ******* 2026-02-02 00:46:03.766470 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.766475 | orchestrator | 2026-02-02 00:46:03.766480 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-02 00:46:03.766485 | orchestrator | Monday 02 February 2026 00:46:03 +0000 (0:00:00.133) 0:01:06.950 ******* 2026-02-02 00:46:03.766490 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})  2026-02-02 00:46:03.766496 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})  2026-02-02 00:46:03.766501 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.766506 | orchestrator | 2026-02-02 00:46:03.766511 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-02 00:46:03.766521 | orchestrator | Monday 02 February 2026 00:46:03 +0000 (0:00:00.154) 0:01:07.104 ******* 2026-02-02 00:46:03.766527 | orchestrator | 
skipping: [testbed-node-5] 2026-02-02 00:46:03.766532 | orchestrator | 2026-02-02 00:46:03.766537 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-02 00:46:03.766542 | orchestrator | Monday 02 February 2026 00:46:03 +0000 (0:00:00.149) 0:01:07.254 ******* 2026-02-02 00:46:03.766547 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})  2026-02-02 00:46:03.766553 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})  2026-02-02 00:46:03.766558 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:03.766563 | orchestrator | 2026-02-02 00:46:03.766569 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-02 00:46:03.766574 | orchestrator | Monday 02 February 2026 00:46:03 +0000 (0:00:00.132) 0:01:07.387 ******* 2026-02-02 00:46:03.766579 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:46:03.766585 | orchestrator | 2026-02-02 00:46:03.766590 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-02 00:46:03.766595 | orchestrator | Monday 02 February 2026 00:46:03 +0000 (0:00:00.269) 0:01:07.656 ******* 2026-02-02 00:46:03.766604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})  2026-02-02 00:46:09.407872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})  2026-02-02 00:46:09.407988 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.408007 | orchestrator | 2026-02-02 00:46:09.408020 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-02-02 00:46:09.408032 | orchestrator | Monday 02 February 2026 00:46:03 +0000 (0:00:00.140) 0:01:07.797 ******* 2026-02-02 00:46:09.408044 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})  2026-02-02 00:46:09.408056 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})  2026-02-02 00:46:09.408087 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.408100 | orchestrator | 2026-02-02 00:46:09.408111 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-02 00:46:09.408123 | orchestrator | Monday 02 February 2026 00:46:03 +0000 (0:00:00.144) 0:01:07.941 ******* 2026-02-02 00:46:09.408134 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})  2026-02-02 00:46:09.408145 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})  2026-02-02 00:46:09.408156 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.408167 | orchestrator | 2026-02-02 00:46:09.408178 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-02 00:46:09.408201 | orchestrator | Monday 02 February 2026 00:46:04 +0000 (0:00:00.153) 0:01:08.094 ******* 2026-02-02 00:46:09.408216 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.408235 | orchestrator | 2026-02-02 00:46:09.408254 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-02 00:46:09.408266 | orchestrator | Monday 02 February 2026 00:46:04 +0000 
(0:00:00.132) 0:01:08.227 ******* 2026-02-02 00:46:09.408277 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.408288 | orchestrator | 2026-02-02 00:46:09.408300 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-02 00:46:09.408311 | orchestrator | Monday 02 February 2026 00:46:04 +0000 (0:00:00.147) 0:01:08.375 ******* 2026-02-02 00:46:09.408322 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.408333 | orchestrator | 2026-02-02 00:46:09.408344 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-02 00:46:09.408355 | orchestrator | Monday 02 February 2026 00:46:04 +0000 (0:00:00.119) 0:01:08.495 ******* 2026-02-02 00:46:09.408366 | orchestrator | ok: [testbed-node-5] => { 2026-02-02 00:46:09.408377 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-02 00:46:09.408389 | orchestrator | } 2026-02-02 00:46:09.408401 | orchestrator | 2026-02-02 00:46:09.408412 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-02 00:46:09.408423 | orchestrator | Monday 02 February 2026 00:46:04 +0000 (0:00:00.119) 0:01:08.615 ******* 2026-02-02 00:46:09.408436 | orchestrator | ok: [testbed-node-5] => { 2026-02-02 00:46:09.408449 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-02 00:46:09.408462 | orchestrator | } 2026-02-02 00:46:09.408475 | orchestrator | 2026-02-02 00:46:09.408487 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-02 00:46:09.408501 | orchestrator | Monday 02 February 2026 00:46:04 +0000 (0:00:00.140) 0:01:08.755 ******* 2026-02-02 00:46:09.408513 | orchestrator | ok: [testbed-node-5] => { 2026-02-02 00:46:09.408526 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-02 00:46:09.408539 | orchestrator | } 2026-02-02 00:46:09.408551 | orchestrator | 2026-02-02 00:46:09.408565 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-02-02 00:46:09.408578 | orchestrator | Monday 02 February 2026 00:46:04 +0000 (0:00:00.135) 0:01:08.891 ******* 2026-02-02 00:46:09.408591 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:46:09.408604 | orchestrator | 2026-02-02 00:46:09.408616 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-02 00:46:09.408629 | orchestrator | Monday 02 February 2026 00:46:05 +0000 (0:00:00.567) 0:01:09.458 ******* 2026-02-02 00:46:09.408642 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:46:09.408655 | orchestrator | 2026-02-02 00:46:09.408676 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-02 00:46:09.408694 | orchestrator | Monday 02 February 2026 00:46:05 +0000 (0:00:00.484) 0:01:09.943 ******* 2026-02-02 00:46:09.408711 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:46:09.408732 | orchestrator | 2026-02-02 00:46:09.408745 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-02 00:46:09.408759 | orchestrator | Monday 02 February 2026 00:46:06 +0000 (0:00:00.673) 0:01:10.617 ******* 2026-02-02 00:46:09.408771 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:46:09.408784 | orchestrator | 2026-02-02 00:46:09.408795 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-02 00:46:09.408807 | orchestrator | Monday 02 February 2026 00:46:06 +0000 (0:00:00.122) 0:01:10.739 ******* 2026-02-02 00:46:09.408818 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.408829 | orchestrator | 2026-02-02 00:46:09.408840 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-02 00:46:09.408851 | orchestrator | Monday 02 February 2026 00:46:06 +0000 (0:00:00.104) 0:01:10.844 ******* 2026-02-02 00:46:09.408862 | orchestrator | 
skipping: [testbed-node-5] 2026-02-02 00:46:09.408873 | orchestrator | 2026-02-02 00:46:09.408884 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-02 00:46:09.408895 | orchestrator | Monday 02 February 2026 00:46:07 +0000 (0:00:00.110) 0:01:10.954 ******* 2026-02-02 00:46:09.408915 | orchestrator | ok: [testbed-node-5] => { 2026-02-02 00:46:09.408984 | orchestrator |  "vgs_report": { 2026-02-02 00:46:09.408997 | orchestrator |  "vg": [] 2026-02-02 00:46:09.409026 | orchestrator |  } 2026-02-02 00:46:09.409039 | orchestrator | } 2026-02-02 00:46:09.409050 | orchestrator | 2026-02-02 00:46:09.409061 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-02 00:46:09.409073 | orchestrator | Monday 02 February 2026 00:46:07 +0000 (0:00:00.131) 0:01:11.086 ******* 2026-02-02 00:46:09.409084 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.409095 | orchestrator | 2026-02-02 00:46:09.409106 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-02 00:46:09.409117 | orchestrator | Monday 02 February 2026 00:46:07 +0000 (0:00:00.127) 0:01:11.213 ******* 2026-02-02 00:46:09.409128 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.409139 | orchestrator | 2026-02-02 00:46:09.409150 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-02 00:46:09.409161 | orchestrator | Monday 02 February 2026 00:46:07 +0000 (0:00:00.120) 0:01:11.334 ******* 2026-02-02 00:46:09.409172 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.409183 | orchestrator | 2026-02-02 00:46:09.409194 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-02 00:46:09.409206 | orchestrator | Monday 02 February 2026 00:46:07 +0000 (0:00:00.139) 0:01:11.473 ******* 2026-02-02 00:46:09.409217 | orchestrator | 
skipping: [testbed-node-5] 2026-02-02 00:46:09.409228 | orchestrator | 2026-02-02 00:46:09.409239 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-02 00:46:09.409250 | orchestrator | Monday 02 February 2026 00:46:07 +0000 (0:00:00.146) 0:01:11.620 ******* 2026-02-02 00:46:09.409261 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.409272 | orchestrator | 2026-02-02 00:46:09.409283 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-02 00:46:09.409294 | orchestrator | Monday 02 February 2026 00:46:07 +0000 (0:00:00.121) 0:01:11.741 ******* 2026-02-02 00:46:09.409305 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.409316 | orchestrator | 2026-02-02 00:46:09.409327 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-02 00:46:09.409344 | orchestrator | Monday 02 February 2026 00:46:07 +0000 (0:00:00.116) 0:01:11.857 ******* 2026-02-02 00:46:09.409355 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.409370 | orchestrator | 2026-02-02 00:46:09.409389 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-02 00:46:09.409408 | orchestrator | Monday 02 February 2026 00:46:08 +0000 (0:00:00.126) 0:01:11.984 ******* 2026-02-02 00:46:09.409427 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.409446 | orchestrator | 2026-02-02 00:46:09.409465 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-02 00:46:09.409489 | orchestrator | Monday 02 February 2026 00:46:08 +0000 (0:00:00.277) 0:01:12.261 ******* 2026-02-02 00:46:09.409500 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:46:09.409511 | orchestrator | 2026-02-02 00:46:09.409522 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-02 
00:46:09.409533 | orchestrator | Monday 02 February 2026 00:46:08 +0000 (0:00:00.130) 0:01:12.392 *******
2026-02-02 00:46:09.409544 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:09.409556 | orchestrator |
2026-02-02 00:46:09.409567 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-02 00:46:09.409578 | orchestrator | Monday 02 February 2026 00:46:08 +0000 (0:00:00.123) 0:01:12.515 *******
2026-02-02 00:46:09.409589 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:09.409600 | orchestrator |
2026-02-02 00:46:09.409611 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-02 00:46:09.409622 | orchestrator | Monday 02 February 2026 00:46:08 +0000 (0:00:00.116) 0:01:12.632 *******
2026-02-02 00:46:09.409633 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:09.409644 | orchestrator |
2026-02-02 00:46:09.409655 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-02 00:46:09.409666 | orchestrator | Monday 02 February 2026 00:46:08 +0000 (0:00:00.129) 0:01:12.762 *******
2026-02-02 00:46:09.409677 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:09.409688 | orchestrator |
2026-02-02 00:46:09.409700 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-02 00:46:09.409710 | orchestrator | Monday 02 February 2026 00:46:08 +0000 (0:00:00.125) 0:01:12.888 *******
2026-02-02 00:46:09.409721 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:09.409732 | orchestrator |
2026-02-02 00:46:09.409743 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-02 00:46:09.409754 | orchestrator | Monday 02 February 2026 00:46:09 +0000 (0:00:00.129) 0:01:13.018 *******
2026-02-02 00:46:09.409765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})
2026-02-02 00:46:09.409777 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})
2026-02-02 00:46:09.409788 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:09.409799 | orchestrator |
2026-02-02 00:46:09.409810 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-02 00:46:09.409829 | orchestrator | Monday 02 February 2026 00:46:09 +0000 (0:00:00.153) 0:01:13.172 *******
2026-02-02 00:46:09.409846 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})
2026-02-02 00:46:09.409858 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})
2026-02-02 00:46:09.409868 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:09.409879 | orchestrator |
2026-02-02 00:46:09.409890 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-02 00:46:09.409902 | orchestrator | Monday 02 February 2026 00:46:09 +0000 (0:00:00.134) 0:01:13.306 *******
2026-02-02 00:46:09.409920 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})
2026-02-02 00:46:12.264177 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})
2026-02-02 00:46:12.264264 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:12.264277 | orchestrator |
2026-02-02 00:46:12.264288 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-02 00:46:12.264298 | orchestrator | Monday 02 February 2026 00:46:09 +0000 (0:00:00.139) 0:01:13.445 *******
2026-02-02 00:46:12.264323 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})
2026-02-02 00:46:12.264333 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})
2026-02-02 00:46:12.264342 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:12.264351 | orchestrator |
2026-02-02 00:46:12.264360 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-02 00:46:12.264369 | orchestrator | Monday 02 February 2026 00:46:09 +0000 (0:00:00.146) 0:01:13.592 *******
2026-02-02 00:46:12.264377 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})
2026-02-02 00:46:12.264397 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})
2026-02-02 00:46:12.264406 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:12.264415 | orchestrator |
2026-02-02 00:46:12.264424 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-02 00:46:12.264432 | orchestrator | Monday 02 February 2026 00:46:09 +0000 (0:00:00.145) 0:01:13.737 *******
2026-02-02 00:46:12.264441 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})
2026-02-02 00:46:12.264450 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})
2026-02-02 00:46:12.264459 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:12.264468 | orchestrator |
2026-02-02 00:46:12.264477 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-02 00:46:12.264486 | orchestrator | Monday 02 February 2026 00:46:10 +0000 (0:00:00.281) 0:01:14.019 *******
2026-02-02 00:46:12.264494 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})
2026-02-02 00:46:12.264503 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})
2026-02-02 00:46:12.264512 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:12.264520 | orchestrator |
2026-02-02 00:46:12.264529 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-02 00:46:12.264538 | orchestrator | Monday 02 February 2026 00:46:10 +0000 (0:00:00.165) 0:01:14.185 *******
2026-02-02 00:46:12.264546 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})
2026-02-02 00:46:12.264555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})
2026-02-02 00:46:12.264564 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:12.264573 | orchestrator |
2026-02-02 00:46:12.264581 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-02 00:46:12.264590 | orchestrator | Monday 02 February 2026 00:46:10 +0000 (0:00:00.134) 0:01:14.319 *******
2026-02-02 00:46:12.264599 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:46:12.264608 | orchestrator |
2026-02-02 00:46:12.264618 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-02 00:46:12.264633 | orchestrator | Monday 02 February 2026 00:46:10 +0000 (0:00:00.511) 0:01:14.830 *******
2026-02-02 00:46:12.264650 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:46:12.264666 | orchestrator |
2026-02-02 00:46:12.264682 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-02 00:46:12.264706 | orchestrator | Monday 02 February 2026 00:46:11 +0000 (0:00:00.492) 0:01:15.323 *******
2026-02-02 00:46:12.264722 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:46:12.264737 | orchestrator |
2026-02-02 00:46:12.264751 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-02 00:46:12.264769 | orchestrator | Monday 02 February 2026 00:46:11 +0000 (0:00:00.139) 0:01:15.462 *******
2026-02-02 00:46:12.264786 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'vg_name': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})
2026-02-02 00:46:12.264802 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'vg_name': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})
2026-02-02 00:46:12.264817 | orchestrator |
2026-02-02 00:46:12.264828 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-02 00:46:12.264838 | orchestrator | Monday 02 February 2026 00:46:11 +0000 (0:00:00.160) 0:01:15.623 *******
2026-02-02 00:46:12.264864 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})
2026-02-02 00:46:12.264875 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})
2026-02-02 00:46:12.264885 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:12.264895 | orchestrator |
2026-02-02 00:46:12.264906 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-02 00:46:12.264916 | orchestrator | Monday 02 February 2026 00:46:11 +0000 (0:00:00.143) 0:01:15.767 *******
2026-02-02 00:46:12.264952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})
2026-02-02 00:46:12.264969 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})
2026-02-02 00:46:12.264979 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:12.264989 | orchestrator |
2026-02-02 00:46:12.264999 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-02 00:46:12.265009 | orchestrator | Monday 02 February 2026 00:46:11 +0000 (0:00:00.140) 0:01:15.907 *******
2026-02-02 00:46:12.265019 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})
2026-02-02 00:46:12.265029 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})
2026-02-02 00:46:12.265039 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:12.265052 | orchestrator |
2026-02-02 00:46:12.265067 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-02 00:46:12.265077 | orchestrator | Monday 02 February 2026 00:46:12 +0000 (0:00:00.155) 0:01:16.062 *******
2026-02-02 00:46:12.265088 | orchestrator | ok: [testbed-node-5] => {
2026-02-02 00:46:12.265098 | orchestrator |     "lvm_report": {
2026-02-02 00:46:12.265108 | orchestrator |         "lv": [
2026-02-02 00:46:12.265118 | orchestrator |             {
2026-02-02 00:46:12.265127 | orchestrator |                 "lv_name": "osd-block-0f572543-3461-541d-9614-18cfec52b251",
2026-02-02 00:46:12.265136 | orchestrator |                 "vg_name": "ceph-0f572543-3461-541d-9614-18cfec52b251"
2026-02-02 00:46:12.265145 | orchestrator |             },
2026-02-02 00:46:12.265154 | orchestrator |             {
2026-02-02 00:46:12.265162 | orchestrator |                 "lv_name": "osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7",
2026-02-02 00:46:12.265171 | orchestrator |                 "vg_name": "ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7"
2026-02-02 00:46:12.265180 | orchestrator |             }
2026-02-02 00:46:12.265188 | orchestrator |         ],
2026-02-02 00:46:12.265197 | orchestrator |         "pv": [
2026-02-02 00:46:12.265212 | orchestrator |             {
2026-02-02 00:46:12.265221 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-02 00:46:12.265230 | orchestrator |                 "vg_name": "ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7"
2026-02-02 00:46:12.265239 | orchestrator |             },
2026-02-02 00:46:12.265247 | orchestrator |             {
2026-02-02 00:46:12.265256 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-02 00:46:12.265265 | orchestrator |                 "vg_name": "ceph-0f572543-3461-541d-9614-18cfec52b251"
2026-02-02 00:46:12.265273 | orchestrator |             }
2026-02-02 00:46:12.265282 | orchestrator |         ]
2026-02-02 00:46:12.265290 | orchestrator |     }
2026-02-02 00:46:12.265299 | orchestrator | }
2026-02-02 00:46:12.265308 | orchestrator |
2026-02-02 00:46:12.265317 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:46:12.265326 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-02-02 00:46:12.265335 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-02-02 00:46:12.265344 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-02-02 00:46:12.265352 | orchestrator |
2026-02-02 00:46:12.265361 | orchestrator |
2026-02-02 00:46:12.265370 | orchestrator |
2026-02-02 00:46:12.265378 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:46:12.265387 | orchestrator | Monday 02 February 2026 00:46:12 +0000 (0:00:00.140) 0:01:16.203 *******
2026-02-02 00:46:12.265396 | orchestrator | ===============================================================================
2026-02-02 00:46:12.265405 | orchestrator | Create block VGs -------------------------------------------------------- 5.78s
2026-02-02 00:46:12.265413 | orchestrator | Create block LVs -------------------------------------------------------- 4.12s
2026-02-02 00:46:12.265422 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.99s
2026-02-02 00:46:12.265431 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.83s
2026-02-02 00:46:12.265446 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.66s
2026-02-02 00:46:12.265455 | orchestrator | Add known partitions to the list of available block devices ------------- 1.61s
2026-02-02 00:46:12.265464 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.60s
2026-02-02 00:46:12.265473 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s
2026-02-02 00:46:12.265488 | orchestrator | Add known links to the list of available block devices ------------------ 1.35s
2026-02-02 00:46:12.564184 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s
2026-02-02 00:46:12.564238 | orchestrator | Add known partitions to the list of available block devices ------------- 1.03s
2026-02-02 00:46:12.564244 | orchestrator | Add known links to the list of available block devices ------------------ 1.02s
2026-02-02 00:46:12.564249 | orchestrator | Print LVM report data --------------------------------------------------- 0.96s
2026-02-02 00:46:12.564254 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.94s
2026-02-02 00:46:12.564259 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.84s
2026-02-02 00:46:12.564264 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2026-02-02 00:46:12.564268 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2026-02-02 00:46:12.564273 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.76s
2026-02-02 00:46:12.564278 | orchestrator | Print number of OSDs wanted per DB+WAL VG ------------------------------- 0.72s
2026-02-02 00:46:12.564282 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2026-02-02 00:46:25.247673 | orchestrator | 2026-02-02 00:46:25 | INFO  | Prepare task for execution of facts.
2026-02-02 00:46:25.322792 | orchestrator | 2026-02-02 00:46:25 | INFO  | Task b759707e-dc85-417e-9c7e-2beb15cf62ea (facts) was prepared for execution.
2026-02-02 00:46:25.322917 | orchestrator | 2026-02-02 00:46:25 | INFO  | It takes a moment until task b759707e-dc85-417e-9c7e-2beb15cf62ea (facts) has been started and output is visible here.
2026-02-02 00:46:38.350319 | orchestrator |
2026-02-02 00:46:38.350420 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-02 00:46:38.350433 | orchestrator |
2026-02-02 00:46:38.350442 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-02 00:46:38.350451 | orchestrator | Monday 02 February 2026 00:46:29 +0000 (0:00:00.271) 0:00:00.271 *******
2026-02-02 00:46:38.350459 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:46:38.350469 | orchestrator | ok: [testbed-manager]
2026-02-02 00:46:38.350477 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:46:38.350486 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:46:38.350494 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:46:38.350502 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:46:38.350510 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:46:38.350518 | orchestrator |
2026-02-02 00:46:38.350527 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-02 00:46:38.350535 | orchestrator | Monday 02 February 2026 00:46:31 +0000 (0:00:01.181) 0:00:01.452 *******
2026-02-02 00:46:38.350544 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:46:38.350552 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:46:38.350561 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:46:38.350569 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:46:38.350577 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:46:38.350585 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:46:38.350593 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:38.350601 | orchestrator |
2026-02-02 00:46:38.350610 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-02 00:46:38.350618 | orchestrator |
2026-02-02 00:46:38.350626 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-02 00:46:38.350634 | orchestrator | Monday 02 February 2026 00:46:32 +0000 (0:00:01.428) 0:00:02.881 *******
2026-02-02 00:46:38.350643 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:46:38.350651 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:46:38.350659 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:46:38.350667 | orchestrator | ok: [testbed-manager]
2026-02-02 00:46:38.350675 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:46:38.350683 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:46:38.350692 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:46:38.350700 | orchestrator |
2026-02-02 00:46:38.350708 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-02 00:46:38.350716 | orchestrator |
2026-02-02 00:46:38.350724 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-02 00:46:38.350733 | orchestrator | Monday 02 February 2026 00:46:37 +0000 (0:00:04.786) 0:00:07.668 *******
2026-02-02 00:46:38.350741 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:46:38.350749 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:46:38.350757 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:46:38.350765 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:46:38.350775 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:46:38.350790 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:46:38.350805 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:46:38.350818 | orchestrator |
2026-02-02 00:46:38.350832 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:46:38.350846 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:46:38.350860 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:46:38.350902 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:46:38.350917 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:46:38.350957 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:46:38.350970 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:46:38.350983 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:46:38.350995 | orchestrator |
2026-02-02 00:46:38.351007 | orchestrator |
2026-02-02 00:46:38.351019 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:46:38.351033 | orchestrator | Monday 02 February 2026 00:46:37 +0000 (0:00:00.551) 0:00:08.219 *******
2026-02-02 00:46:38.351047 | orchestrator | ===============================================================================
2026-02-02 00:46:38.351060 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.79s
2026-02-02 00:46:38.351072 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.43s
2026-02-02 00:46:38.351084 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.18s
2026-02-02 00:46:38.351098 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s
2026-02-02 00:46:50.892165 | orchestrator | 2026-02-02 00:46:50 | INFO  | Prepare task for execution of frr.
2026-02-02 00:46:50.983495 | orchestrator | 2026-02-02 00:46:50 | INFO  | Task c8ad9bdc-c885-4e1b-bda0-68fb6e38d056 (frr) was prepared for execution.
2026-02-02 00:46:50.983571 | orchestrator | 2026-02-02 00:46:50 | INFO  | It takes a moment until task c8ad9bdc-c885-4e1b-bda0-68fb6e38d056 (frr) has been started and output is visible here.
2026-02-02 00:47:17.673380 | orchestrator |
2026-02-02 00:47:17.673513 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-02 00:47:17.673533 | orchestrator |
2026-02-02 00:47:17.673545 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-02 00:47:17.673558 | orchestrator | Monday 02 February 2026 00:46:55 +0000 (0:00:00.241) 0:00:00.242 *******
2026-02-02 00:47:17.673569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-02 00:47:17.673583 | orchestrator |
2026-02-02 00:47:17.673594 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-02 00:47:17.673605 | orchestrator | Monday 02 February 2026 00:46:55 +0000 (0:00:00.229) 0:00:00.472 *******
2026-02-02 00:47:17.673621 | orchestrator | changed: [testbed-manager]
2026-02-02 00:47:17.673641 | orchestrator |
2026-02-02 00:47:17.673660 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-02 00:47:17.673682 | orchestrator | Monday 02 February 2026 00:46:56 +0000 (0:00:01.212) 0:00:01.684 *******
2026-02-02 00:47:17.673701 | orchestrator | changed: [testbed-manager]
2026-02-02 00:47:17.673719 | orchestrator |
2026-02-02 00:47:17.673733 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-02 00:47:17.673744 | orchestrator | Monday 02 February 2026 00:47:07 +0000 (0:00:10.350) 0:00:12.035 *******
2026-02-02 00:47:17.673755 | orchestrator | ok: [testbed-manager]
2026-02-02 00:47:17.673766 | orchestrator |
2026-02-02 00:47:17.673778 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-02 00:47:17.673789 | orchestrator | Monday 02 February 2026 00:47:08 +0000 (0:00:01.048) 0:00:13.084 *******
2026-02-02 00:47:17.673799 | orchestrator | changed: [testbed-manager]
2026-02-02 00:47:17.673835 | orchestrator |
2026-02-02 00:47:17.673847 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-02 00:47:17.673858 | orchestrator | Monday 02 February 2026 00:47:09 +0000 (0:00:01.004) 0:00:14.089 *******
2026-02-02 00:47:17.673870 | orchestrator | ok: [testbed-manager]
2026-02-02 00:47:17.673881 | orchestrator |
2026-02-02 00:47:17.673892 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-02 00:47:17.673904 | orchestrator | Monday 02 February 2026 00:47:10 +0000 (0:00:01.248) 0:00:15.338 *******
2026-02-02 00:47:17.673961 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:47:17.673976 | orchestrator |
2026-02-02 00:47:17.673989 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-02 00:47:17.674002 | orchestrator | Monday 02 February 2026 00:47:10 +0000 (0:00:00.133) 0:00:15.471 *******
2026-02-02 00:47:17.674105 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:47:17.674123 | orchestrator |
2026-02-02 00:47:17.674135 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-02 00:47:17.674148 | orchestrator | Monday 02 February 2026 00:47:10 +0000 (0:00:00.173) 0:00:15.645 *******
2026-02-02 00:47:17.674160 | orchestrator | changed: [testbed-manager]
2026-02-02 00:47:17.674172 | orchestrator |
2026-02-02 00:47:17.674185 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-02 00:47:17.674198 | orchestrator | Monday 02 February 2026 00:47:11 +0000 (0:00:01.029) 0:00:16.674 *******
2026-02-02 00:47:17.674211 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-02 00:47:17.674223 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-02 00:47:17.674235 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-02 00:47:17.674246 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-02 00:47:17.674258 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-02 00:47:17.674269 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-02 00:47:17.674280 | orchestrator |
2026-02-02 00:47:17.674291 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-02 00:47:17.674302 | orchestrator | Monday 02 February 2026 00:47:14 +0000 (0:00:02.373) 0:00:19.047 *******
2026-02-02 00:47:17.674313 | orchestrator | ok: [testbed-manager]
2026-02-02 00:47:17.674324 | orchestrator |
2026-02-02 00:47:17.674334 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-02-02 00:47:17.674345 | orchestrator | Monday 02 February 2026 00:47:15 +0000 (0:00:01.749) 0:00:20.797 *******
2026-02-02 00:47:17.674356 | orchestrator | changed: [testbed-manager]
2026-02-02 00:47:17.674367 | orchestrator |
2026-02-02 00:47:17.674378 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:47:17.674390 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 00:47:17.674401 | orchestrator |
2026-02-02 00:47:17.674412 | orchestrator |
2026-02-02 00:47:17.674423 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:47:17.674434 | orchestrator | Monday 02 February 2026 00:47:17 +0000 (0:00:01.359) 0:00:22.156 *******
2026-02-02 00:47:17.674445 | orchestrator | ===============================================================================
2026-02-02 00:47:17.674456 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.35s
2026-02-02 00:47:17.674467 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.37s
2026-02-02 00:47:17.674478 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.75s
2026-02-02 00:47:17.674489 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.36s
2026-02-02 00:47:17.674511 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.25s
2026-02-02 00:47:17.674550 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.21s
2026-02-02 00:47:17.674562 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.05s
2026-02-02 00:47:17.674573 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.03s
2026-02-02 00:47:17.674584 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.01s
2026-02-02 00:47:17.674595 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s
2026-02-02 00:47:17.674606 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.17s
2026-02-02 00:47:17.674617 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s
2026-02-02 00:47:18.015625 | orchestrator |
2026-02-02 00:47:18.016825 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Feb 2 00:47:18 UTC 2026
2026-02-02 00:47:18.016881 | orchestrator |
2026-02-02 00:47:20.059539 | orchestrator | 2026-02-02 00:47:20 | INFO  | Collection nutshell is prepared for execution
2026-02-02 00:47:20.059655 | orchestrator | 2026-02-02 00:47:20 | INFO  | A [0] - dotfiles
2026-02-02 00:47:30.091882 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [0] - homer
2026-02-02 00:47:30.092060 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [0] - netdata
2026-02-02 00:47:30.092078 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [0] - openstackclient
2026-02-02 00:47:30.092093 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [0] - phpmyadmin
2026-02-02 00:47:30.092111 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [0] - common
2026-02-02 00:47:30.096027 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [1] -- loadbalancer
2026-02-02 00:47:30.096093 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [2] --- opensearch
2026-02-02 00:47:30.096107 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [2] --- mariadb-ng
2026-02-02 00:47:30.096195 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [3] ---- horizon
2026-02-02 00:47:30.096212 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [3] ---- keystone
2026-02-02 00:47:30.096236 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [4] ----- neutron
2026-02-02 00:47:30.096467 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [5] ------ wait-for-nova
2026-02-02 00:47:30.097135 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [6] ------- octavia
2026-02-02 00:47:30.098539 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [4] ----- barbican
2026-02-02 00:47:30.098717 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [4] ----- designate
2026-02-02 00:47:30.098740 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [4] ----- ironic
2026-02-02 00:47:30.098802 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [4] ----- placement
2026-02-02 00:47:30.098814 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [4] ----- magnum
2026-02-02 00:47:30.098837 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [1] -- openvswitch
2026-02-02 00:47:30.098850 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [2] --- ovn
2026-02-02 00:47:30.099635 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [1] -- memcached
2026-02-02 00:47:30.099719 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [1] -- redis
2026-02-02 00:47:30.099736 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [1] -- rabbitmq-ng
2026-02-02 00:47:30.099748 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [0] - kubernetes
2026-02-02 00:47:30.102393 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [1] -- kubeconfig
2026-02-02 00:47:30.102450 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [1] -- copy-kubeconfig
2026-02-02 00:47:30.102495 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [0] - ceph
2026-02-02 00:47:30.104798 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [1] -- ceph-pools
2026-02-02 00:47:30.104824 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [2] --- copy-ceph-keys
2026-02-02 00:47:30.104837 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [3] ---- cephclient
2026-02-02 00:47:30.104896 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-02-02 00:47:30.105029 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [4] ----- wait-for-keystone
2026-02-02 00:47:30.105042 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [5] ------ kolla-ceph-rgw
2026-02-02 00:47:30.105232 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [5] ------ glance
2026-02-02 00:47:30.105252 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [5] ------ cinder
2026-02-02 00:47:30.105747 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [5] ------ nova
2026-02-02 00:47:30.105814 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [4] ----- prometheus
2026-02-02 00:47:30.106001 | orchestrator | 2026-02-02 00:47:30 | INFO  | A [5] ------ grafana
2026-02-02 00:47:30.346882 | orchestrator | 2026-02-02 00:47:30 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-02-02 00:47:30.347018 | orchestrator | 2026-02-02 00:47:30 | INFO  | Tasks are running in the background
2026-02-02 00:47:33.599718 | orchestrator | 2026-02-02 00:47:33 | INFO  | No task IDs specified, wait for all currently running tasks
2026-02-02 00:47:35.716861 | orchestrator | 2026-02-02 00:47:35 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:47:35.719548 | orchestrator | 2026-02-02 00:47:35 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:47:35.719990 | orchestrator | 2026-02-02 00:47:35 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:47:35.723595 | orchestrator | 2026-02-02 00:47:35 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED
2026-02-02 00:47:35.723635 | orchestrator | 2026-02-02 00:47:35 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:47:35.723646 | orchestrator | 2026-02-02 00:47:35 | INFO  | Task 5067286b-ea40-46d9-8b62-2537da520fa9 is in state STARTED
2026-02-02 00:47:35.723657 | orchestrator | 2026-02-02 00:47:35 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED
2026-02-02 00:47:35.723668 | orchestrator | 2026-02-02 00:47:35 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:47:38.759699 | orchestrator | 2026-02-02 00:47:38 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:47:38.760213 | orchestrator | 2026-02-02 00:47:38 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:47:38.760670 | orchestrator | 2026-02-02 00:47:38 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:47:38.761312 | orchestrator | 2026-02-02 00:47:38 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED
2026-02-02 00:47:38.766270 | orchestrator | 2026-02-02 00:47:38 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:47:38.769225 | orchestrator | 2026-02-02 00:47:38 | INFO  | Task 5067286b-ea40-46d9-8b62-2537da520fa9 is in state STARTED
2026-02-02 00:47:38.769793 | orchestrator | 2026-02-02 00:47:38 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED
2026-02-02 00:47:38.769820 | orchestrator | 2026-02-02 00:47:38 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:47:41.852582 | orchestrator | 2026-02-02 00:47:41 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:47:41.854310 | orchestrator | 2026-02-02 00:47:41 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:47:41.854606 | orchestrator | 2026-02-02 00:47:41 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:47:41.856142 | orchestrator | 2026-02-02 00:47:41 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED
2026-02-02 00:47:41.857388 | orchestrator | 2026-02-02 00:47:41 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:47:41.858383 | orchestrator | 2026-02-02 00:47:41 | INFO  | Task 5067286b-ea40-46d9-8b62-2537da520fa9 is in state STARTED
2026-02-02 00:47:41.861225 | orchestrator | 2026-02-02 00:47:41 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED
2026-02-02 00:47:41.861338 | orchestrator | 2026-02-02 00:47:41 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:47:45.015074 | orchestrator | 2026-02-02 00:47:45 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:47:45.015165 | orchestrator | 2026-02-02 00:47:45 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:47:45.015408 | orchestrator | 2026-02-02 00:47:45 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:47:45.015987 | orchestrator | 2026-02-02 00:47:45 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED
2026-02-02 00:47:45.016671 | orchestrator | 2026-02-02 00:47:45 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:47:45.017850 | orchestrator | 2026-02-02 00:47:45 | INFO  | Task 5067286b-ea40-46d9-8b62-2537da520fa9 is in state STARTED
2026-02-02 00:47:45.018449 | orchestrator | 2026-02-02 00:47:45 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED
2026-02-02 00:47:45.018482 | orchestrator | 2026-02-02 00:47:45 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:47:48.371124 | orchestrator | 2026-02-02 00:47:48 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:47:48.371214 | orchestrator | 2026-02-02 00:47:48 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:47:48.371229 | orchestrator | 2026-02-02 00:47:48 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:47:48.371241 | orchestrator | 2026-02-02 00:47:48 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED
2026-02-02 00:47:48.371252 | orchestrator | 2026-02-02 00:47:48 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:47:48.371263 | orchestrator | 2026-02-02 00:47:48 | INFO  | Task 5067286b-ea40-46d9-8b62-2537da520fa9 is in state STARTED
2026-02-02 00:47:48.371274 | orchestrator | 2026-02-02 00:47:48 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED
2026-02-02 00:47:48.371286 | orchestrator | 2026-02-02 00:47:48 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:47:51.637469 | orchestrator | 2026-02-02 00:47:51 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:47:51.637552 | orchestrator | 2026-02-02 00:47:51 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:47:51.761935 | orchestrator | 2026-02-02 00:47:51 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:47:51.761990 |
orchestrator | 2026-02-02 00:47:51 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED 2026-02-02 00:47:51.762042 | orchestrator | 2026-02-02 00:47:51 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:47:51.762051 | orchestrator | 2026-02-02 00:47:51 | INFO  | Task 5067286b-ea40-46d9-8b62-2537da520fa9 is in state STARTED 2026-02-02 00:47:51.762058 | orchestrator | 2026-02-02 00:47:51 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED 2026-02-02 00:47:51.762076 | orchestrator | 2026-02-02 00:47:51 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:47:54.721757 | orchestrator | 2026-02-02 00:47:54 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED 2026-02-02 00:47:54.722406 | orchestrator | 2026-02-02 00:47:54 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED 2026-02-02 00:47:54.724523 | orchestrator | 2026-02-02 00:47:54 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:47:54.724534 | orchestrator | 2026-02-02 00:47:54 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED 2026-02-02 00:47:54.725474 | orchestrator | 2026-02-02 00:47:54 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:47:54.725946 | orchestrator | 2026-02-02 00:47:54 | INFO  | Task 5067286b-ea40-46d9-8b62-2537da520fa9 is in state STARTED 2026-02-02 00:47:54.731073 | orchestrator | 2026-02-02 00:47:54 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED 2026-02-02 00:47:54.731101 | orchestrator | 2026-02-02 00:47:54 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:47:57.771691 | orchestrator | 2026-02-02 00:47:57 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED 2026-02-02 00:47:57.771827 | orchestrator | 2026-02-02 00:47:57 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED 2026-02-02 00:47:57.772588 | 
orchestrator | 2026-02-02 00:47:57 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:47:57.774168 | orchestrator | 2026-02-02 00:47:57 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED
2026-02-02 00:47:57.774235 | orchestrator | 2026-02-02 00:47:57 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:47:57.775195 | orchestrator | 2026-02-02 00:47:57 | INFO  | Task 5067286b-ea40-46d9-8b62-2537da520fa9 is in state STARTED
2026-02-02 00:47:57.776163 | orchestrator | 2026-02-02 00:47:57 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED
2026-02-02 00:47:57.776192 | orchestrator | 2026-02-02 00:47:57 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:48:00.858560 | orchestrator | 2026-02-02 00:48:00 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:48:00.860027 | orchestrator | 2026-02-02 00:48:00 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:48:00.861176 | orchestrator | 2026-02-02 00:48:00 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:48:00.863816 | orchestrator | 2026-02-02 00:48:00 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED
2026-02-02 00:48:00.865693 | orchestrator | 2026-02-02 00:48:00 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:48:00.866877 | orchestrator | 2026-02-02 00:48:00 | INFO  | Task 5067286b-ea40-46d9-8b62-2537da520fa9 is in state SUCCESS
2026-02-02 00:48:00.867061 | orchestrator |
2026-02-02 00:48:00.867076 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-02-02 00:48:00.867096 | orchestrator |
2026-02-02 00:48:00.867105 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-02-02 00:48:00.867127 | orchestrator | Monday 02 February 2026 00:47:44 +0000 (0:00:00.363) 0:00:00.363 *******
2026-02-02 00:48:00.867134 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:48:00.867140 | orchestrator | changed: [testbed-manager]
2026-02-02 00:48:00.867146 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:48:00.867151 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:48:00.867156 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:48:00.867162 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:48:00.867167 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:48:00.867173 | orchestrator |
2026-02-02 00:48:00.867178 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-02-02 00:48:00.867184 | orchestrator | Monday 02 February 2026 00:47:48 +0000 (0:00:04.259) 0:00:04.623 *******
2026-02-02 00:48:00.867190 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-02 00:48:00.867196 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-02 00:48:00.867201 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-02 00:48:00.867207 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-02 00:48:00.867212 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-02 00:48:00.867218 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-02 00:48:00.867223 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-02 00:48:00.867229 | orchestrator |
2026-02-02 00:48:00.867234 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2026-02-02 00:48:00.867240 | orchestrator | Monday 02 February 2026 00:47:51 +0000 (0:00:02.505) 0:00:07.128 ******* 2026-02-02 00:48:00.867248 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-02 00:47:49.402400', 'end': '2026-02-02 00:47:49.407106', 'delta': '0:00:00.004706', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-02 00:48:00.867259 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-02 00:47:49.270641', 'end': '2026-02-02 00:47:49.277952', 'delta': '0:00:00.007311', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-02 00:48:00.867266 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-02 00:47:50.541114', 'end': '2026-02-02 00:47:50.545760', 'delta': '0:00:00.004646', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-02 00:48:00.867298 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-02 00:47:49.964924', 'end': '2026-02-02 00:47:49.968867', 'delta': '0:00:00.003943', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-02 00:48:00.867305 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-02 00:47:50.687090', 'end': '2026-02-02 00:47:50.691201', 'delta': '0:00:00.004111', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-02 00:48:00.867311 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-02 00:47:50.837408', 'end': '2026-02-02 00:47:50.840973', 'delta': '0:00:00.003565', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-02 00:48:00.867316 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-02 00:47:49.281040', 'end': '2026-02-02 00:47:49.285691', 'delta': '0:00:00.004651', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-02 00:48:00.867322 | orchestrator |
2026-02-02 00:48:00.867328 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-02-02 00:48:00.867334 | orchestrator | Monday 02 February 2026 00:47:53 +0000 (0:00:02.725) 0:00:09.854 *******
2026-02-02 00:48:00.867339 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-02 00:48:00.867345 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-02 00:48:00.867350 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-02 00:48:00.867356 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-02 00:48:00.867365 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-02 00:48:00.867371 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-02 00:48:00.867376 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-02 00:48:00.867382 | orchestrator |
2026-02-02 00:48:00.867387 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-02-02 00:48:00.867393 | orchestrator | Monday 02 February 2026 00:47:55 +0000 (0:00:01.311) 0:00:11.165 *******
2026-02-02 00:48:00.867399 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-02-02 00:48:00.867404 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-02-02 00:48:00.867410 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-02-02 00:48:00.867415 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-02-02 00:48:00.867421 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-02-02 00:48:00.867426 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-02-02 00:48:00.867432 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-02-02 00:48:00.867437 | orchestrator |
2026-02-02 00:48:00.867445 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:48:00.867454 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:48:00.867461 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:48:00.867467 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:48:00.867472 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:48:00.867478 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:48:00.867483 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:48:00.867489 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:48:00.867494 | orchestrator |
2026-02-02 00:48:00.867500 | orchestrator |
2026-02-02 00:48:00.867506 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:48:00.867511 | orchestrator | Monday 02 February 2026 00:47:58 +0000 (0:00:03.543) 0:00:14.709 *******
2026-02-02 00:48:00.867517 | orchestrator | ===============================================================================
2026-02-02 00:48:00.867522 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.26s
2026-02-02 00:48:00.867530 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.54s
2026-02-02 00:48:00.867539 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.73s
2026-02-02 00:48:00.867547 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.51s
2026-02-02 00:48:00.867557 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.31s
2026-02-02 00:48:00.867732 | orchestrator | 2026-02-02 00:48:00 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED
2026-02-02 00:48:00.868346 | orchestrator | 2026-02-02 00:48:00 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:48:03.954187 | orchestrator | 2026-02-02 00:48:03 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:48:03.954277 | orchestrator | 2026-02-02 00:48:03 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:48:03.954319 | orchestrator | 2026-02-02 00:48:03 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:48:03.954359 | orchestrator | 2026-02-02 00:48:03 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:48:03.954371 | orchestrator | 2026-02-02 00:48:03 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED
2026-02-02 00:48:03.954382 | orchestrator | 2026-02-02 00:48:03 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is
in state STARTED 2026-02-02 00:48:03.954394 | orchestrator | 2026-02-02 00:48:03 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED 2026-02-02 00:48:03.954405 | orchestrator | 2026-02-02 00:48:03 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:48:07.015646 | orchestrator | 2026-02-02 00:48:07 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED 2026-02-02 00:48:07.015735 | orchestrator | 2026-02-02 00:48:07 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED 2026-02-02 00:48:07.015750 | orchestrator | 2026-02-02 00:48:07 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED 2026-02-02 00:48:07.015762 | orchestrator | 2026-02-02 00:48:07 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:48:07.015773 | orchestrator | 2026-02-02 00:48:07 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED 2026-02-02 00:48:07.015785 | orchestrator | 2026-02-02 00:48:07 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:48:07.015797 | orchestrator | 2026-02-02 00:48:07 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED 2026-02-02 00:48:07.015808 | orchestrator | 2026-02-02 00:48:07 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:48:10.062132 | orchestrator | 2026-02-02 00:48:10 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED 2026-02-02 00:48:10.064712 | orchestrator | 2026-02-02 00:48:10 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED 2026-02-02 00:48:10.065666 | orchestrator | 2026-02-02 00:48:10 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED 2026-02-02 00:48:10.066543 | orchestrator | 2026-02-02 00:48:10 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:48:10.067038 | orchestrator | 2026-02-02 00:48:10 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in 
state STARTED 2026-02-02 00:48:10.070888 | orchestrator | 2026-02-02 00:48:10 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:48:10.071686 | orchestrator | 2026-02-02 00:48:10 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED 2026-02-02 00:48:10.071732 | orchestrator | 2026-02-02 00:48:10 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:48:13.122091 | orchestrator | 2026-02-02 00:48:13 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED 2026-02-02 00:48:13.122532 | orchestrator | 2026-02-02 00:48:13 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED 2026-02-02 00:48:13.124239 | orchestrator | 2026-02-02 00:48:13 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED 2026-02-02 00:48:13.127506 | orchestrator | 2026-02-02 00:48:13 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:48:13.129187 | orchestrator | 2026-02-02 00:48:13 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED 2026-02-02 00:48:13.129499 | orchestrator | 2026-02-02 00:48:13 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:48:13.131851 | orchestrator | 2026-02-02 00:48:13 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED 2026-02-02 00:48:13.132752 | orchestrator | 2026-02-02 00:48:13 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:48:16.218252 | orchestrator | 2026-02-02 00:48:16 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED 2026-02-02 00:48:16.218406 | orchestrator | 2026-02-02 00:48:16 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED 2026-02-02 00:48:16.218422 | orchestrator | 2026-02-02 00:48:16 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED 2026-02-02 00:48:16.218434 | orchestrator | 2026-02-02 00:48:16 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state 
STARTED 2026-02-02 00:48:16.218450 | orchestrator | 2026-02-02 00:48:16 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED 2026-02-02 00:48:16.218466 | orchestrator | 2026-02-02 00:48:16 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:48:16.218483 | orchestrator | 2026-02-02 00:48:16 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED 2026-02-02 00:48:16.218499 | orchestrator | 2026-02-02 00:48:16 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:48:19.305840 | orchestrator | 2026-02-02 00:48:19 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED 2026-02-02 00:48:19.306101 | orchestrator | 2026-02-02 00:48:19 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED 2026-02-02 00:48:19.306127 | orchestrator | 2026-02-02 00:48:19 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED 2026-02-02 00:48:19.306138 | orchestrator | 2026-02-02 00:48:19 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:48:19.306150 | orchestrator | 2026-02-02 00:48:19 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED 2026-02-02 00:48:19.306161 | orchestrator | 2026-02-02 00:48:19 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:48:19.306172 | orchestrator | 2026-02-02 00:48:19 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED 2026-02-02 00:48:19.306183 | orchestrator | 2026-02-02 00:48:19 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:48:22.530831 | orchestrator | 2026-02-02 00:48:22 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED 2026-02-02 00:48:22.530919 | orchestrator | 2026-02-02 00:48:22 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED 2026-02-02 00:48:22.530929 | orchestrator | 2026-02-02 00:48:22 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED 
2026-02-02 00:48:22.530935 | orchestrator | 2026-02-02 00:48:22 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:48:22.530940 | orchestrator | 2026-02-02 00:48:22 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED 2026-02-02 00:48:22.530946 | orchestrator | 2026-02-02 00:48:22 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:48:22.530952 | orchestrator | 2026-02-02 00:48:22 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED 2026-02-02 00:48:22.530958 | orchestrator | 2026-02-02 00:48:22 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:48:25.408296 | orchestrator | 2026-02-02 00:48:25 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED 2026-02-02 00:48:25.413396 | orchestrator | 2026-02-02 00:48:25 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED 2026-02-02 00:48:25.421156 | orchestrator | 2026-02-02 00:48:25 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED 2026-02-02 00:48:25.421262 | orchestrator | 2026-02-02 00:48:25 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:48:25.428110 | orchestrator | 2026-02-02 00:48:25 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state STARTED 2026-02-02 00:48:25.437115 | orchestrator | 2026-02-02 00:48:25 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:48:25.448806 | orchestrator | 2026-02-02 00:48:25 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state STARTED 2026-02-02 00:48:25.448881 | orchestrator | 2026-02-02 00:48:25 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:48:28.490766 | orchestrator | 2026-02-02 00:48:28 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED 2026-02-02 00:48:28.490965 | orchestrator | 2026-02-02 00:48:28 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED 
2026-02-02 00:48:28.492029 | orchestrator | 2026-02-02 00:48:28 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:48:28.493046 | orchestrator | 2026-02-02 00:48:28 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:48:28.495158 | orchestrator | 2026-02-02 00:48:28 | INFO  | Task 5d2754f0-e025-4212-bcac-3e4dff06f2f3 is in state SUCCESS
2026-02-02 00:48:28.496518 | orchestrator | 2026-02-02 00:48:28 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:48:28.497376 | orchestrator | 2026-02-02 00:48:28 | INFO  | Task 37522eb1-7b34-4532-b14b-efa68bf4f248 is in state SUCCESS
2026-02-02 00:48:28.497801 | orchestrator | 2026-02-02 00:48:28 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:48:31.542460 | orchestrator | 2026-02-02 00:48:31 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:48:31.543977 | orchestrator | 2026-02-02 00:48:31 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:48:31.546181 | orchestrator | 2026-02-02 00:48:31 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:48:31.549005 | orchestrator | 2026-02-02 00:48:31 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:48:31.553197 | orchestrator | 2026-02-02 00:48:31 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:48:31.554742 | orchestrator | 2026-02-02 00:48:31 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:48:34.648685 | orchestrator | 2026-02-02 00:48:34 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:48:34.650539 | orchestrator | 2026-02-02 00:48:34 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:48:34.655834 | orchestrator | 2026-02-02 00:48:34 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:48:34.657922 | orchestrator | 2026-02-02 00:48:34 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:48:34.658952 | orchestrator | 2026-02-02 00:48:34 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:48:34.659079 | orchestrator | 2026-02-02 00:48:34 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:48:37.697300 | orchestrator | 2026-02-02 00:48:37 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:48:37.698454 | orchestrator | 2026-02-02 00:48:37 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:48:37.699225 | orchestrator | 2026-02-02 00:48:37 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:48:37.700203 | orchestrator | 2026-02-02 00:48:37 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:48:37.702217 | orchestrator | 2026-02-02 00:48:37 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:48:37.702573 | orchestrator | 2026-02-02 00:48:37 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:48:40.754135 | orchestrator | 2026-02-02 00:48:40 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:48:40.754209 | orchestrator | 2026-02-02 00:48:40 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:48:40.754215 | orchestrator | 2026-02-02 00:48:40 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:48:40.754220 | orchestrator | 2026-02-02 00:48:40 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:48:40.755440 | orchestrator | 2026-02-02 00:48:40 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:48:40.755481 | orchestrator | 2026-02-02 00:48:40 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:48:43.864467 | orchestrator | 2026-02-02 00:48:43 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:48:43.864581 | orchestrator | 2026-02-02 00:48:43 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:48:43.867741 | orchestrator | 2026-02-02 00:48:43 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:48:43.868166 | orchestrator | 2026-02-02 00:48:43 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:48:43.869053 | orchestrator | 2026-02-02 00:48:43 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:48:43.869105 | orchestrator | 2026-02-02 00:48:43 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:48:46.928531 | orchestrator | 2026-02-02 00:48:46 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:48:46.929541 | orchestrator | 2026-02-02 00:48:46 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:48:46.931665 | orchestrator | 2026-02-02 00:48:46 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:48:46.933901 | orchestrator | 2026-02-02 00:48:46 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:48:46.934759 | orchestrator | 2026-02-02 00:48:46 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:48:46.934980 | orchestrator | 2026-02-02 00:48:46 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:48:49.997108 | orchestrator | 2026-02-02 00:48:49 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:48:49.999683 | orchestrator | 2026-02-02 00:48:50 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:48:49.999747 | orchestrator | 2026-02-02 00:48:50 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:48:50.007375 | orchestrator | 2026-02-02 00:48:50 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:48:50.010800 | orchestrator | 2026-02-02 00:48:50 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:48:50.010841 | orchestrator | 2026-02-02 00:48:50 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:48:53.108407 | orchestrator | 2026-02-02 00:48:53 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:48:53.108583 | orchestrator | 2026-02-02 00:48:53 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:48:53.108609 | orchestrator | 2026-02-02 00:48:53 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:48:53.108629 | orchestrator | 2026-02-02 00:48:53 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:48:53.108648 | orchestrator | 2026-02-02 00:48:53 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:48:53.108668 | orchestrator | 2026-02-02 00:48:53 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:48:56.159395 | orchestrator | 2026-02-02 00:48:56 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:48:56.163433 | orchestrator | 2026-02-02 00:48:56 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:48:56.167727 | orchestrator | 2026-02-02 00:48:56 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:48:56.168233 | orchestrator | 2026-02-02 00:48:56 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:48:56.169914 | orchestrator | 2026-02-02 00:48:56 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:48:56.170540 | orchestrator | 2026-02-02 00:48:56 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:48:59.277737 | orchestrator | 2026-02-02 00:48:59 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:48:59.284775 | orchestrator | 2026-02-02 00:48:59 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:48:59.288502 | orchestrator | 2026-02-02 00:48:59 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:48:59.290301 | orchestrator | 2026-02-02 00:48:59 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:48:59.294043 | orchestrator | 2026-02-02 00:48:59 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:48:59.294431 | orchestrator | 2026-02-02 00:48:59 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:49:02.356090 | orchestrator | 2026-02-02 00:49:02 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:49:02.360488 | orchestrator | 2026-02-02 00:49:02 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:49:02.369391 | orchestrator | 2026-02-02 00:49:02 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:49:02.374948 | orchestrator | 2026-02-02 00:49:02 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:49:02.378735 | orchestrator | 2026-02-02 00:49:02 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:49:02.381349 | orchestrator | 2026-02-02 00:49:02 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:49:05.446391 | orchestrator | 2026-02-02 00:49:05 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:49:05.447194 | orchestrator | 2026-02-02 00:49:05 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:49:05.448491 | orchestrator | 2026-02-02 00:49:05 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:49:05.450199 | orchestrator | 2026-02-02 00:49:05 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:49:05.451205 | orchestrator | 2026-02-02 00:49:05 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:49:05.451263 | orchestrator | 2026-02-02 00:49:05 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:49:08.491074 | orchestrator | 2026-02-02 00:49:08 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:49:08.491350 | orchestrator | 2026-02-02 00:49:08 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state STARTED
2026-02-02 00:49:08.493963 | orchestrator | 2026-02-02 00:49:08 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:49:08.494567 | orchestrator | 2026-02-02 00:49:08 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:49:08.495828 | orchestrator | 2026-02-02 00:49:08 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:49:08.496029 | orchestrator | 2026-02-02 00:49:08 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:49:11.518770 | orchestrator | 2026-02-02 00:49:11 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:49:11.519418 | orchestrator | 2026-02-02 00:49:11 | INFO  | Task b38c315d-f7f3-4cd2-ad46-69e5fcfafab2 is in state SUCCESS
2026-02-02 00:49:11.519751 | orchestrator |
2026-02-02 00:49:11.519769 | orchestrator |
2026-02-02 00:49:11.519776 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-02 00:49:11.519784 | orchestrator |
2026-02-02 00:49:11.519790 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-02 00:49:11.519797 | orchestrator | Monday 02 February 2026 00:47:42 +0000 (0:00:00.466) 0:00:00.466 *******
2026-02-02 00:49:11.519812 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-02 00:49:11.519818 | orchestrator |
2026-02-02 00:49:11.519822 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-02 00:49:11.519839 | orchestrator | Monday 02 February 2026 00:47:43 +0000 (0:00:00.523) 0:00:00.990 *******
2026-02-02 00:49:11.519844 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-02 00:49:11.519848 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-02 00:49:11.519853 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-02 00:49:11.519857 | orchestrator |
2026-02-02 00:49:11.519861 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-02 00:49:11.519865 | orchestrator | Monday 02 February 2026 00:47:45 +0000 (0:00:01.957) 0:00:02.948 *******
2026-02-02 00:49:11.519869 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.519873 | orchestrator |
2026-02-02 00:49:11.519877 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-02 00:49:11.519881 | orchestrator | Monday 02 February 2026 00:47:47 +0000 (0:00:02.457) 0:00:05.406 *******
2026-02-02 00:49:11.519885 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-02-02 00:49:11.519891 | orchestrator | ok: [testbed-manager]
2026-02-02 00:49:11.519897 | orchestrator |
2026-02-02 00:49:11.519900 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-02 00:49:11.519904 | orchestrator | Monday 02 February 2026 00:48:19 +0000 (0:00:32.223) 0:00:37.629 *******
2026-02-02 00:49:11.519908 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.519912 | orchestrator |
2026-02-02 00:49:11.519916 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-02 00:49:11.519920 | orchestrator | Monday 02 February 2026 00:48:23 +0000 (0:00:03.383) 0:00:41.013 *******
2026-02-02 00:49:11.519924 | orchestrator | ok: [testbed-manager]
2026-02-02 00:49:11.519928 | orchestrator |
2026-02-02 00:49:11.519932 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-02 00:49:11.519936 | orchestrator | Monday 02 February 2026 00:48:23 +0000 (0:00:00.724) 0:00:41.737 *******
2026-02-02 00:49:11.519954 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.519958 | orchestrator |
2026-02-02 00:49:11.519962 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-02 00:49:11.519966 | orchestrator | Monday 02 February 2026 00:48:25 +0000 (0:00:01.671) 0:00:43.409 *******
2026-02-02 00:49:11.519969 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.519975 | orchestrator |
2026-02-02 00:49:11.519982 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-02 00:49:11.519988 | orchestrator | Monday 02 February 2026 00:48:26 +0000 (0:00:00.728) 0:00:44.138 *******
2026-02-02 00:49:11.519994 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.519998 | orchestrator |
2026-02-02 00:49:11.520009 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-02 00:49:11.520013 | orchestrator | Monday 02 February 2026 00:48:26 +0000 (0:00:00.597) 0:00:44.735 *******
2026-02-02 00:49:11.520017 | orchestrator | ok: [testbed-manager]
2026-02-02 00:49:11.520021 | orchestrator |
2026-02-02 00:49:11.520024 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:49:11.520028 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:49:11.520033 | orchestrator |
2026-02-02 00:49:11.520037 | orchestrator |
2026-02-02 00:49:11.520041 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:49:11.520045 | orchestrator | Monday 02 February 2026 00:48:27 +0000 (0:00:00.698) 0:00:45.434 *******
2026-02-02 00:49:11.520048 | orchestrator | ===============================================================================
2026-02-02 00:49:11.520052 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.22s
2026-02-02 00:49:11.520056 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 3.38s
2026-02-02 00:49:11.520060 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.46s
2026-02-02 00:49:11.520064 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.96s
2026-02-02 00:49:11.520071 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.67s
2026-02-02 00:49:11.520077 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.73s
2026-02-02 00:49:11.520083 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.72s
2026-02-02 00:49:11.520090 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.70s
2026-02-02 00:49:11.520097 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.60s
2026-02-02 00:49:11.520104 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.52s
2026-02-02 00:49:11.520111 | orchestrator |
2026-02-02 00:49:11.520222 | orchestrator |
2026-02-02 00:49:11.520231 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-02-02 00:49:11.520235 | orchestrator |
2026-02-02 00:49:11.520239 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-02-02 00:49:11.520243 | orchestrator | Monday 02 February 2026 00:47:44 +0000 (0:00:00.715) 0:00:00.715 *******
2026-02-02 00:49:11.520247 | orchestrator | ok: [testbed-manager] => {
2026-02-02 00:49:11.520256 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-02-02 00:49:11.520260 | orchestrator | }
2026-02-02 00:49:11.520264 | orchestrator |
2026-02-02 00:49:11.520268 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-02-02 00:49:11.520272 | orchestrator | Monday 02 February 2026 00:47:45 +0000 (0:00:01.076) 0:00:01.791 *******
2026-02-02 00:49:11.520275 | orchestrator | ok: [testbed-manager]
2026-02-02 00:49:11.520280 | orchestrator |
2026-02-02 00:49:11.520283 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-02-02 00:49:11.520287 | orchestrator | Monday 02 February 2026 00:47:47 +0000 (0:00:02.398) 0:00:04.190 *******
2026-02-02 00:49:11.520306 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-02-02 00:49:11.520316 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-02-02 00:49:11.520320 | orchestrator |
2026-02-02 00:49:11.520324 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-02-02 00:49:11.520328 | orchestrator | Monday 02 February 2026 00:47:48 +0000 (0:00:01.290) 0:00:05.481 *******
2026-02-02 00:49:11.520332 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.520336 | orchestrator |
2026-02-02 00:49:11.520340 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-02-02 00:49:11.520343 | orchestrator | Monday 02 February 2026 00:47:51 +0000 (0:00:02.243) 0:00:07.725 *******
2026-02-02 00:49:11.520347 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.520351 | orchestrator |
2026-02-02 00:49:11.520355 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-02-02 00:49:11.520359 | orchestrator | Monday 02 February 2026 00:47:56 +0000 (0:00:05.023) 0:00:12.748 *******
2026-02-02 00:49:11.520363 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-02-02 00:49:11.520367 | orchestrator | ok: [testbed-manager]
2026-02-02 00:49:11.520370 | orchestrator |
2026-02-02 00:49:11.520374 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-02-02 00:49:11.520380 | orchestrator | Monday 02 February 2026 00:48:23 +0000 (0:00:27.563) 0:00:40.312 *******
2026-02-02 00:49:11.520386 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.520391 | orchestrator |
2026-02-02 00:49:11.520395 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:49:11.520399 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:49:11.520403 | orchestrator |
2026-02-02 00:49:11.520407 | orchestrator |
2026-02-02 00:49:11.520411 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:49:11.520415 | orchestrator | Monday 02 February 2026 00:48:25 +0000 (0:00:01.575) 0:00:41.887 *******
2026-02-02 00:49:11.520419 | orchestrator | ===============================================================================
2026-02-02 00:49:11.520422 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.56s
2026-02-02 00:49:11.520426 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 5.02s
2026-02-02 00:49:11.520430 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.39s
2026-02-02 00:49:11.520434 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.24s
2026-02-02 00:49:11.520441 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.58s
2026-02-02 00:49:11.520445 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.29s
2026-02-02 00:49:11.520449 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 1.08s
2026-02-02 00:49:11.520453 | orchestrator |
2026-02-02 00:49:11.520457 | orchestrator |
2026-02-02 00:49:11.520461 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 00:49:11.520465 | orchestrator |
2026-02-02 00:49:11.520468 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 00:49:11.520473 | orchestrator | Monday 02 February 2026 00:47:45 +0000 (0:00:00.807) 0:00:00.807 *******
2026-02-02 00:49:11.520479 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-02-02 00:49:11.520486 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-02-02 00:49:11.520492 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-02-02 00:49:11.520498 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-02-02 00:49:11.520504 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-02-02 00:49:11.520510 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-02-02 00:49:11.520517 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-02-02 00:49:11.520528 | orchestrator |
2026-02-02 00:49:11.520535 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-02-02 00:49:11.520539 | orchestrator |
2026-02-02 00:49:11.520542 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-02-02 00:49:11.520546 | orchestrator | Monday 02 February 2026 00:47:46 +0000 (0:00:01.365) 0:00:02.173 *******
2026-02-02 00:49:11.520556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5
2026-02-02 00:49:11.520561 | orchestrator |
2026-02-02 00:49:11.520565 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-02-02 00:49:11.520569 | orchestrator | Monday 02 February 2026 00:47:48 +0000 (0:00:01.816) 0:00:03.990 *******
2026-02-02 00:49:11.520573 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:49:11.520577 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:49:11.520582 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:49:11.520588 | orchestrator | ok: [testbed-manager]
2026-02-02 00:49:11.520594 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:49:11.520606 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:49:11.520610 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:49:11.520614 | orchestrator |
2026-02-02 00:49:11.520620 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-02-02 00:49:11.520625 | orchestrator | Monday 02 February 2026 00:47:51 +0000 (0:00:02.617) 0:00:06.607 *******
2026-02-02 00:49:11.520629 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:49:11.520633 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:49:11.520637 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:49:11.520641 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:49:11.520645 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:49:11.520651 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:49:11.520657 | orchestrator | ok: [testbed-manager]
2026-02-02 00:49:11.520660 | orchestrator |
2026-02-02 00:49:11.520664 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-02-02 00:49:11.520670 | orchestrator | Monday 02 February 2026 00:47:55 +0000 (0:00:04.114) 0:00:10.722 *******
2026-02-02 00:49:11.520675 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.520679 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:49:11.520683 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:49:11.520687 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:49:11.520691 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:49:11.520694 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:49:11.520698 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:49:11.520702 | orchestrator |
2026-02-02 00:49:11.520706 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-02-02 00:49:11.520710 | orchestrator | Monday 02 February 2026 00:47:58 +0000 (0:00:02.729) 0:00:13.452 *******
2026-02-02 00:49:11.520714 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:49:11.520717 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:49:11.520721 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:49:11.520725 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:49:11.520729 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:49:11.520732 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:49:11.520736 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.520740 | orchestrator |
2026-02-02 00:49:11.520744 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-02-02 00:49:11.520748 | orchestrator | Monday 02 February 2026 00:48:09 +0000 (0:00:11.023) 0:00:24.476 *******
2026-02-02 00:49:11.520751 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:49:11.520755 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:49:11.520759 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:49:11.520763 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:49:11.520767 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:49:11.520774 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:49:11.520777 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.520781 | orchestrator |
2026-02-02 00:49:11.520787 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-02-02 00:49:11.520793 | orchestrator | Monday 02 February 2026 00:48:43 +0000 (0:00:34.594) 0:00:59.070 *******
2026-02-02 00:49:11.520801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:49:11.520808 | orchestrator |
2026-02-02 00:49:11.520815 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-02-02 00:49:11.520821 | orchestrator | Monday 02 February 2026 00:48:45 +0000 (0:00:01.993) 0:01:01.064 *******
2026-02-02 00:49:11.520856 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-02-02 00:49:11.520864 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-02-02 00:49:11.520870 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-02-02 00:49:11.520877 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-02-02 00:49:11.520884 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-02-02 00:49:11.520891 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-02-02 00:49:11.520898 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-02-02 00:49:11.520905 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-02-02 00:49:11.520911 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-02-02 00:49:11.520918 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-02-02 00:49:11.520924 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-02-02 00:49:11.520931 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-02-02 00:49:11.520938 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-02-02 00:49:11.521015 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-02-02 00:49:11.521023 | orchestrator |
2026-02-02 00:49:11.521038 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-02-02 00:49:11.521047 | orchestrator | Monday 02 February 2026 00:48:51 +0000 (0:00:05.700) 0:01:06.765 *******
2026-02-02 00:49:11.521053 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:49:11.521066 | orchestrator | ok: [testbed-manager]
2026-02-02 00:49:11.521074 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:49:11.521078 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:49:11.521082 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:49:11.521086 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:49:11.521090 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:49:11.521093 | orchestrator |
2026-02-02 00:49:11.521097 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-02-02 00:49:11.521101 | orchestrator | Monday 02 February 2026 00:48:52 +0000 (0:00:01.527) 0:01:08.292 *******
2026-02-02 00:49:11.521105 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:49:11.521109 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:49:11.521113 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.521116 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:49:11.521120 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:49:11.521124 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:49:11.521128 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:49:11.521137 | orchestrator |
2026-02-02 00:49:11.521144 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-02 00:49:11.521153 | orchestrator | Monday 02 February 2026 00:48:55 +0000 (0:00:02.662) 0:01:10.955 *******
2026-02-02 00:49:11.521168 | orchestrator | ok: [testbed-manager]
2026-02-02 00:49:11.521174 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:49:11.521180 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:49:11.521186 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:49:11.521193 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:49:11.521206 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:49:11.521213 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:49:11.521219 | orchestrator |
2026-02-02 00:49:11.521225 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-02-02 00:49:11.521232 | orchestrator | Monday 02 February 2026 00:48:57 +0000 (0:00:01.880) 0:01:12.836 *******
2026-02-02 00:49:11.521239 | orchestrator | ok: [testbed-manager]
2026-02-02 00:49:11.521245 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:49:11.521250 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:49:11.521254 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:49:11.521258 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:49:11.521262 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:49:11.521265 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:49:11.521269 | orchestrator | 
2026-02-02 00:49:11.521274 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-02-02 00:49:11.521281 | orchestrator | Monday 02 February 2026 00:49:01 +0000 (0:00:03.676) 0:01:16.512 *******
2026-02-02 00:49:11.521288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-02-02 00:49:11.521295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:49:11.521303 | orchestrator | 
2026-02-02 00:49:11.521318 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-02-02 00:49:11.521324 | orchestrator | Monday 02 February 2026 00:49:02 +0000 (0:00:01.595) 0:01:18.108 *******
2026-02-02 00:49:11.521327 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.521331 | orchestrator | 
2026-02-02 00:49:11.521335 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-02-02 00:49:11.521339 | orchestrator | Monday 02 February 2026 00:49:04 +0000 (0:00:01.799) 0:01:19.907 *******
2026-02-02 00:49:11.521343 | orchestrator | changed: [testbed-manager]
2026-02-02 00:49:11.521352 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:49:11.521356 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:49:11.521360 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:49:11.521364 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:49:11.521368 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:49:11.521376 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:49:11.521380 | orchestrator | 
2026-02-02 00:49:11.521384 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:49:11.521388 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:49:11.521392 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:49:11.521396 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:49:11.521400 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:49:11.521404 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:49:11.521408 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:49:11.521412 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:49:11.521416 | orchestrator | 
2026-02-02 00:49:11.521420 | orchestrator | 
2026-02-02 00:49:11.521424 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:49:11.521427 | orchestrator | Monday 02 February 2026 00:49:08 +0000 (0:00:03.767) 0:01:23.675 *******
2026-02-02 00:49:11.521435 | orchestrator | ===============================================================================
2026-02-02 00:49:11.521438 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 34.59s
2026-02-02 00:49:11.521442 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.02s
2026-02-02 00:49:11.521446 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.70s
2026-02-02 00:49:11.521450 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.11s
2026-02-02 00:49:11.521454 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.77s
2026-02-02 00:49:11.521458 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.68s
2026-02-02 00:49:11.521462 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.73s
2026-02-02 00:49:11.521466 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.66s
2026-02-02 00:49:11.521489 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.62s
2026-02-02 00:49:11.521496 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.99s
2026-02-02 00:49:11.521503 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.88s
2026-02-02 00:49:11.521513 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.82s
2026-02-02 00:49:11.521520 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.80s
2026-02-02 00:49:11.521527 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.60s
2026-02-02 00:49:11.521534 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.53s
2026-02-02 00:49:11.521538 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.37s
2026-02-02 00:49:11.521542 | orchestrator | 2026-02-02 00:49:11 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:49:11.522581 | orchestrator | 2026-02-02 00:49:11 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:49:11.523237 | orchestrator | 2026-02-02 00:49:11 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:49:11.523380 | orchestrator | 2026-02-02 00:49:11 | INFO  | Wait 1 second(s) until the next check
2026-02-02
00:49:14.569102 | orchestrator | 2026-02-02 00:49:14 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:49:14.569682 | orchestrator | 2026-02-02 00:49:14 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state STARTED
2026-02-02 00:49:14.572677 | orchestrator | 2026-02-02 00:49:14 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:49:14.575188 | orchestrator | 2026-02-02 00:49:14 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:49:14.575433 | orchestrator | 2026-02-02 00:49:14 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:49:26.757042 | orchestrator | 2026-02-02 00:49:26 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:49:26.757128 | orchestrator | 2026-02-02 00:49:26 | INFO  | Task 6e9a5419-f33f-4c82-8309-ed00d7829f22 is in state SUCCESS
2026-02-02 00:49:26.757849 | orchestrator | 2026-02-02 00:49:26 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:49:26.758419 | orchestrator | 2026-02-02 00:49:26 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:49:26.758460 | orchestrator | 2026-02-02 00:49:26 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:06.384958 | orchestrator | 2026-02-02 00:50:06 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:50:06.386936 | orchestrator | 2026-02-02 00:50:06 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:06.388721 | orchestrator | 2026-02-02 00:50:06 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:06.388822 | orchestrator | 2026-02-02 00:50:06 | INFO  |
Wait 1 second(s) until the next check
2026-02-02 00:50:09.427111 | orchestrator | 2026-02-02 00:50:09 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state STARTED
2026-02-02 00:50:09.428080 | orchestrator | 2026-02-02 00:50:09 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:09.429055 | orchestrator | 2026-02-02 00:50:09 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:09.429107 | orchestrator | 2026-02-02 00:50:09 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:12.481206 | orchestrator | 2026-02-02 00:50:12 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:12.491702 | orchestrator | 
2026-02-02 00:50:12.491821 | orchestrator | 
2026-02-02 00:50:12.491837 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-02-02 00:50:12.491851 | orchestrator | 
2026-02-02 00:50:12.491862 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-02-02 00:50:12.491874 | orchestrator | Monday 02 February 2026 00:48:03 +0000 (0:00:00.189) 0:00:00.189 *******
2026-02-02 00:50:12.491886 | orchestrator | ok: [testbed-manager]
2026-02-02 00:50:12.491901 | orchestrator | 
2026-02-02 00:50:12.492042 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-02-02 00:50:12.492072 | orchestrator | Monday 02 February 2026 00:48:04 +0000 (0:00:01.079) 0:00:01.268 *******
2026-02-02 00:50:12.492091 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-02-02 00:50:12.492110 | orchestrator | 
2026-02-02 00:50:12.492228 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-02-02 00:50:12.492254 | orchestrator | Monday 02 February 2026 00:48:05 +0000 (0:00:01.735) 0:00:02.382 *******
2026-02-02 00:50:12.492276 | orchestrator | changed:
[testbed-manager]
2026-02-02 00:50:12.492298 | orchestrator | 
2026-02-02 00:50:12.492314 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-02-02 00:50:12.492326 | orchestrator | Monday 02 February 2026 00:48:07 +0000 (0:00:01.735) 0:00:04.117 *******
2026-02-02 00:50:12.492340 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-02-02 00:50:12.492354 | orchestrator | ok: [testbed-manager]
2026-02-02 00:50:12.492366 | orchestrator | 
2026-02-02 00:50:12.492380 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-02-02 00:50:12.492392 | orchestrator | Monday 02 February 2026 00:49:21 +0000 (0:01:13.840) 0:01:17.958 *******
2026-02-02 00:50:12.492405 | orchestrator | changed: [testbed-manager]
2026-02-02 00:50:12.492418 | orchestrator | 
2026-02-02 00:50:12.492431 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:50:12.492445 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:50:12.492486 | orchestrator | 
2026-02-02 00:50:12.492498 | orchestrator | 
2026-02-02 00:50:12.492509 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:50:12.492520 | orchestrator | Monday 02 February 2026 00:49:25 +0000 (0:00:03.892) 0:01:21.850 *******
2026-02-02 00:50:12.492531 | orchestrator | ===============================================================================
2026-02-02 00:50:12.492541 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 73.84s
2026-02-02 00:50:12.492552 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.89s
2026-02-02 00:50:12.492563 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.74s
2026-02-02 00:50:12.492573 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.11s
2026-02-02 00:50:12.492584 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.08s
2026-02-02 00:50:12.492602 | orchestrator | 
2026-02-02 00:50:12.492934 | orchestrator | 2026-02-02 00:50:12 | INFO  | Task b82f7901-fa24-4f7d-b63e-1530f7762719 is in state SUCCESS
2026-02-02 00:50:12.494350 | orchestrator | 
2026-02-02 00:50:12.494481 | orchestrator | PLAY [Apply role common] *******************************************************
2026-02-02 00:50:12.494507 | orchestrator | 
2026-02-02 00:50:12.494529 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-02 00:50:12.494550 | orchestrator | Monday 02 February 2026 00:47:35 +0000 (0:00:00.259) 0:00:00.259 *******
2026-02-02 00:50:12.494570 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:50:12.494593 | orchestrator | 
2026-02-02 00:50:12.494615 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-02 00:50:12.494636 | orchestrator | Monday 02 February 2026 00:47:36 +0000 (0:00:01.417) 0:00:01.677 *******
2026-02-02 00:50:12.494658 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-02 00:50:12.494681 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-02 00:50:12.494704 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-02 00:50:12.494727 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-02 00:50:12.494809 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-02 00:50:12.494834 | orchestrator | 
changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 00:50:12.494857 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 00:50:12.494880 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 00:50:12.494902 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 00:50:12.494925 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 00:50:12.494948 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 00:50:12.494971 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 00:50:12.494994 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 00:50:12.495015 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 00:50:12.495038 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 00:50:12.495061 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 00:50:12.495085 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 00:50:12.495148 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 00:50:12.495186 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 00:50:12.495207 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 00:50:12.495229 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 00:50:12.495252 | 
orchestrator | 2026-02-02 00:50:12.495276 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-02 00:50:12.495297 | orchestrator | Monday 02 February 2026 00:47:41 +0000 (0:00:04.576) 0:00:06.254 ******* 2026-02-02 00:50:12.495320 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:50:12.495344 | orchestrator | 2026-02-02 00:50:12.495368 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-02 00:50:12.495389 | orchestrator | Monday 02 February 2026 00:47:43 +0000 (0:00:01.460) 0:00:07.715 ******* 2026-02-02 00:50:12.495417 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.495445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.495499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.495524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.495548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.495572 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.495618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.495643 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.495665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.495713 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.495737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.495789 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.495823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.495852 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.495873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.495893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.495912 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.495957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.495978 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.495996 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.496014 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.496044 | orchestrator | 2026-02-02 00:50:12.496065 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-02 00:50:12.496083 | orchestrator | Monday 02 February 2026 00:47:48 +0000 (0:00:05.985) 0:00:13.700 ******* 2026-02-02 00:50:12.496104 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496137 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496151 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496174 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:50:12.496198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496341 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:50:12.496352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496371 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:50:12.496382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496393 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:50:12.496405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496432 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:50:12.496444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496505 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:50:12.496516 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:50:12.496527 | orchestrator | 2026-02-02 00:50:12.496539 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-02 00:50:12.496550 | orchestrator | Monday 02 February 2026 00:47:52 +0000 (0:00:03.089) 0:00:16.790 ******* 2026-02-02 00:50:12.496562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496574 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496611 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-02-02 00:50:12.496621 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:50:12.496631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496647 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496679 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:50:12.496689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496710 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:50:12.496724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496826 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:50:12.496836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496857 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:50:12.496867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 00:50:12.496882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496893 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:50:12.496904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.496931 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:50:12.496941 | orchestrator | 2026-02-02 00:50:12.496951 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-02 00:50:12.496961 | orchestrator | Monday 02 February 2026 00:47:57 +0000 (0:00:05.108) 0:00:21.899 ******* 2026-02-02 00:50:12.496971 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:50:12.496981 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:50:12.496991 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:50:12.497000 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:50:12.497010 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:50:12.497025 | orchestrator | skipping: 
[testbed-node-4] 2026-02-02 00:50:12.497038 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:50:12.497054 | orchestrator | 2026-02-02 00:50:12.497069 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-02 00:50:12.497085 | orchestrator | Monday 02 February 2026 00:47:58 +0000 (0:00:01.365) 0:00:23.264 ******* 2026-02-02 00:50:12.497103 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:50:12.497119 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:50:12.497138 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:50:12.497148 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:50:12.497158 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:50:12.497168 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:50:12.497177 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:50:12.497187 | orchestrator | 2026-02-02 00:50:12.497197 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-02 00:50:12.497206 | orchestrator | Monday 02 February 2026 00:48:00 +0000 (0:00:01.731) 0:00:24.996 ******* 2026-02-02 00:50:12.497216 | orchestrator | skipping: [testbed-manager] 2026-02-02 00:50:12.497226 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:50:12.497235 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:50:12.497246 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:50:12.497261 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:50:12.497278 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:50:12.497293 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:50:12.497307 | orchestrator | 2026-02-02 00:50:12.497322 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-02 00:50:12.497338 | orchestrator | Monday 02 February 2026 00:48:01 +0000 (0:00:01.266) 0:00:26.263 ******* 2026-02-02 00:50:12.497355 | orchestrator | changed: 
[testbed-node-0] 2026-02-02 00:50:12.497372 | orchestrator | changed: [testbed-manager] 2026-02-02 00:50:12.497388 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:50:12.497400 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:50:12.497410 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:50:12.497419 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:50:12.497429 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:50:12.497439 | orchestrator | 2026-02-02 00:50:12.497449 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-02 00:50:12.497459 | orchestrator | Monday 02 February 2026 00:48:04 +0000 (0:00:03.157) 0:00:29.421 ******* 2026-02-02 00:50:12.497469 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.497485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.497503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.497514 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.497524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.497541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.497563 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.497589 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497686 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497717 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497733 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.497743 | orchestrator | 2026-02-02 00:50:12.497804 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-02 00:50:12.497816 | orchestrator | Monday 02 February 2026 00:48:09 +0000 (0:00:05.194) 0:00:34.615 ******* 2026-02-02 00:50:12.497826 | orchestrator | [WARNING]: Skipped 2026-02-02 00:50:12.497839 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-02 00:50:12.497850 | orchestrator | to this access issue: 2026-02-02 00:50:12.497861 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-02 00:50:12.497872 | orchestrator | directory 2026-02-02 00:50:12.497883 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 00:50:12.497895 | orchestrator | 2026-02-02 00:50:12.497906 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-02 00:50:12.497917 | orchestrator | Monday 02 February 2026 00:48:11 +0000 (0:00:01.229) 0:00:35.844 ******* 2026-02-02 00:50:12.497928 | orchestrator | [WARNING]: Skipped 2026-02-02 00:50:12.497938 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-02 00:50:12.497950 | orchestrator | to this access issue: 2026-02-02 00:50:12.497961 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-02 00:50:12.497972 | orchestrator | directory 2026-02-02 00:50:12.497983 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 00:50:12.498001 | orchestrator | 2026-02-02 
00:50:12.498068 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-02 00:50:12.498097 | orchestrator | Monday 02 February 2026 00:48:12 +0000 (0:00:01.316) 0:00:37.161 ******* 2026-02-02 00:50:12.498114 | orchestrator | [WARNING]: Skipped 2026-02-02 00:50:12.498125 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-02 00:50:12.498136 | orchestrator | to this access issue: 2026-02-02 00:50:12.498147 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-02 00:50:12.498158 | orchestrator | directory 2026-02-02 00:50:12.498169 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 00:50:12.498180 | orchestrator | 2026-02-02 00:50:12.498191 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-02 00:50:12.498202 | orchestrator | Monday 02 February 2026 00:48:13 +0000 (0:00:01.043) 0:00:38.204 ******* 2026-02-02 00:50:12.498213 | orchestrator | [WARNING]: Skipped 2026-02-02 00:50:12.498227 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-02 00:50:12.498245 | orchestrator | to this access issue: 2026-02-02 00:50:12.498257 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-02 00:50:12.498268 | orchestrator | directory 2026-02-02 00:50:12.498279 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 00:50:12.498290 | orchestrator | 2026-02-02 00:50:12.498302 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-02 00:50:12.498313 | orchestrator | Monday 02 February 2026 00:48:14 +0000 (0:00:01.175) 0:00:39.380 ******* 2026-02-02 00:50:12.498324 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:50:12.498335 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:50:12.498351 | 
orchestrator | changed: [testbed-manager] 2026-02-02 00:50:12.498363 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:50:12.498374 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:50:12.498385 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:50:12.498395 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:50:12.498406 | orchestrator | 2026-02-02 00:50:12.498417 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-02 00:50:12.498429 | orchestrator | Monday 02 February 2026 00:48:20 +0000 (0:00:05.525) 0:00:44.905 ******* 2026-02-02 00:50:12.498440 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 00:50:12.498451 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 00:50:12.498465 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 00:50:12.498482 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 00:50:12.498496 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 00:50:12.498514 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 00:50:12.498526 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 00:50:12.498537 | orchestrator | 2026-02-02 00:50:12.498548 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-02 00:50:12.498559 | orchestrator | Monday 02 February 2026 00:48:23 +0000 (0:00:03.615) 0:00:48.521 ******* 2026-02-02 00:50:12.498570 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:50:12.498581 | 
orchestrator | changed: [testbed-node-1] 2026-02-02 00:50:12.498592 | orchestrator | changed: [testbed-manager] 2026-02-02 00:50:12.498603 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:50:12.498614 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:50:12.498625 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:50:12.498636 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:50:12.498647 | orchestrator | 2026-02-02 00:50:12.498658 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-02 00:50:12.498679 | orchestrator | Monday 02 February 2026 00:48:26 +0000 (0:00:02.957) 0:00:51.479 ******* 2026-02-02 00:50:12.498703 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.498717 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.498729 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.498741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.498788 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.498805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.498817 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.498849 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.498862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.498873 | orchestrator | ok: [testbed-node-1] 
=> (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.498885 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.498897 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.498913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.498925 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.498937 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.498962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.498974 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.498986 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.498997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:50:12.499009 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.499026 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.499038 | orchestrator | 2026-02-02 00:50:12.499049 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-02 00:50:12.499060 | orchestrator | Monday 02 February 2026 00:48:29 +0000 (0:00:02.386) 0:00:53.865 ******* 2026-02-02 00:50:12.499071 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 00:50:12.499082 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 00:50:12.499093 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 00:50:12.499104 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 00:50:12.499122 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 00:50:12.499133 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 00:50:12.499144 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 00:50:12.499155 | orchestrator | 2026-02-02 00:50:12.499166 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-02 00:50:12.499177 | orchestrator | Monday 02 February 2026 00:48:31 +0000 (0:00:02.211) 0:00:56.077 ******* 2026-02-02 00:50:12.499190 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 00:50:12.499209 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 00:50:12.499226 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 00:50:12.499242 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 00:50:12.499260 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 00:50:12.499271 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 00:50:12.499282 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 00:50:12.499293 | orchestrator | 2026-02-02 00:50:12.499310 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-02 00:50:12.499322 | orchestrator | Monday 02 February 2026 00:48:34 +0000 (0:00:03.132) 0:00:59.210 ******* 2026-02-02 00:50:12.499333 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.499345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.499357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.499373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.499385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.499403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.499415 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:50:12.499444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 00:50:12.499456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 00:50:12.499468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.499480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.499496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.499515 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.499527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 00:50:12.499540 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.499558 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.499570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.499582 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.499594 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.499617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.499631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.499649 | orchestrator |
2026-02-02 00:50:12.499661 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-02-02 00:50:12.499672 | orchestrator | Monday 02 February 2026 00:48:38 +0000 (0:00:04.435) 0:01:03.646 *******
2026-02-02 00:50:12.499683 | orchestrator | changed: [testbed-manager] => {
2026-02-02 00:50:12.499694 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 00:50:12.499706 | orchestrator | }
2026-02-02 00:50:12.499717 | orchestrator | changed: [testbed-node-0] => {
2026-02-02 00:50:12.499728 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 00:50:12.499739 | orchestrator | }
2026-02-02 00:50:12.499804 | orchestrator | changed: [testbed-node-1] => {
2026-02-02 00:50:12.499820 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 00:50:12.499831 | orchestrator | }
2026-02-02 00:50:12.499842 | orchestrator | changed: [testbed-node-2] => {
2026-02-02 00:50:12.499853 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 00:50:12.499864 | orchestrator | }
2026-02-02 00:50:12.499875 | orchestrator | changed: [testbed-node-3] => {
2026-02-02 00:50:12.499886 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 00:50:12.499897 | orchestrator | }
2026-02-02 00:50:12.499908 | orchestrator | changed: [testbed-node-4] => {
2026-02-02 00:50:12.499919 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 00:50:12.499930 | orchestrator | }
2026-02-02 00:50:12.499941 | orchestrator | changed: [testbed-node-5] => {
2026-02-02 00:50:12.499952 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 00:50:12.499964 | orchestrator | }
2026-02-02 00:50:12.499975 | orchestrator |
2026-02-02 00:50:12.499986 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-02 00:50:12.499997 | orchestrator | Monday 02 February 2026 00:48:40 +0000 (0:00:01.493) 0:01:05.140 *******
2026-02-02 00:50:12.500016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 00:50:12.500029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500059 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 00:50:12.500069 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500080 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500090 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:50:12.500100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 00:50:12.500116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500158 | orchestrator | skipping: [testbed-manager]
2026-02-02 00:50:12.500169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 00:50:12.500198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500219 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:50:12.500234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 00:50:12.500245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500266 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:50:12.500276 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:50:12.500292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 00:50:12.500303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500334 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:50:12.500344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 00:50:12.500359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:50:12.500380 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:50:12.500390 | orchestrator |
2026-02-02 00:50:12.500400 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-02 00:50:12.500410 | orchestrator | Monday 02 February 2026 00:48:44 +0000 (0:00:03.591) 0:01:08.732 *******
2026-02-02 00:50:12.500420 | orchestrator | changed: [testbed-manager]
2026-02-02 00:50:12.500430 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:50:12.500440 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:50:12.500450 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:50:12.500460 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:50:12.500470 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:50:12.500480 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:50:12.500490 | orchestrator |
2026-02-02 00:50:12.500500 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-02 00:50:12.500510 | orchestrator | Monday 02 February 2026 00:48:46 +0000 (0:00:02.707) 0:01:11.439 *******
2026-02-02 00:50:12.500519 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:50:12.500529 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:50:12.500539 | orchestrator | changed: [testbed-manager]
2026-02-02 00:50:12.500549 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:50:12.500559 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:50:12.500569 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:50:12.500579 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:50:12.500588 | orchestrator |
2026-02-02 00:50:12.500599 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 00:50:12.500609 | orchestrator | Monday 02 February 2026 00:48:48 +0000 (0:00:00.089) 0:01:13.591 *******
2026-02-02 00:50:12.500626 | orchestrator |
2026-02-02 00:50:12.500637 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 00:50:12.500647 | orchestrator | Monday 02 February 2026 00:48:48 +0000 (0:00:00.102) 0:01:13.681 *******
2026-02-02 00:50:12.500657 | orchestrator |
2026-02-02 00:50:12.500667 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 00:50:12.500677 | orchestrator | Monday 02 February 2026 00:48:49 +0000 (0:00:00.079) 0:01:13.783 *******
2026-02-02 00:50:12.500687 | orchestrator |
2026-02-02 00:50:12.500702 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 00:50:12.500712 | orchestrator | Monday 02 February 2026 00:48:49 +0000 (0:00:00.322) 0:01:14.186 *******
2026-02-02 00:50:12.500774 | orchestrator |
2026-02-02 00:50:12.500786 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 00:50:12.500796 | orchestrator | Monday 02 February 2026 00:48:49 +0000 (0:00:00.076) 0:01:14.262 *******
2026-02-02 00:50:12.500806 | orchestrator |
2026-02-02 00:50:12.500816 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 00:50:12.500825 | orchestrator | Monday 02 February 2026 00:48:49 +0000 (0:00:00.074) 0:01:14.337 *******
2026-02-02 00:50:12.500836 | orchestrator |
2026-02-02 00:50:12.500846 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-02 00:50:12.500856 | orchestrator | Monday 02 February 2026 00:48:49 +0000 (0:00:00.095) 0:01:14.432 *******
2026-02-02 00:50:12.500865 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:50:12.500875 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:50:12.500885 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:50:12.500895 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:50:12.500905 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:50:12.500915 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:50:12.500925 | orchestrator | changed: [testbed-manager]
2026-02-02 00:50:12.500935 | orchestrator |
2026-02-02 00:50:12.500945 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-02 00:50:12.500954 | orchestrator | Monday 02 February 2026 00:49:23 +0000 (0:00:33.314) 0:01:47.746 *******
2026-02-02 00:50:12.500964 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:50:12.500974 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:50:12.500984 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:50:12.500994 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:50:12.501003 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:50:12.501013 | orchestrator | changed: [testbed-manager]
2026-02-02 00:50:12.501023 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:50:12.501033 | orchestrator |
2026-02-02 00:50:12.501042 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-02 00:50:12.501052 | orchestrator | Monday 02 February 2026 00:49:58 +0000 (0:00:35.150) 0:02:22.897 *******
2026-02-02 00:50:12.501062 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:50:12.501072 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:50:12.501082 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:50:12.501091 | orchestrator | ok: [testbed-manager]
2026-02-02 00:50:12.501101 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:50:12.501111 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:50:12.501120 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:50:12.501130 | orchestrator |
2026-02-02 00:50:12.501140 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-02 00:50:12.501150 | orchestrator | Monday 02 February 2026 00:50:00 +0000 (0:00:02.037) 0:02:24.935 *******
2026-02-02 00:50:12.501160 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:50:12.501170 | orchestrator | changed: [testbed-manager]
2026-02-02 00:50:12.501180 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:50:12.501197 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:50:12.501207 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:50:12.501217 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:50:12.501232 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:50:12.501249 | orchestrator |
2026-02-02 00:50:12.501263 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:50:12.501282 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 00:50:12.501300 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 00:50:12.501316 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 00:50:12.501331 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 00:50:12.501341 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 00:50:12.501351 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 00:50:12.501361 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 00:50:12.501371 | orchestrator |
2026-02-02 00:50:12.501381 | orchestrator |
2026-02-02 00:50:12.501391 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:50:12.501401 | orchestrator | Monday 02 February 2026 00:50:09 +0000 (0:00:09.289) 0:02:34.224 *******
2026-02-02 00:50:12.501411 | orchestrator | ===============================================================================
2026-02-02 00:50:12.501420 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.15s
2026-02-02 00:50:12.501430 | orchestrator | common : Restart fluentd container ------------------------------------- 33.31s
2026-02-02 00:50:12.501440 | orchestrator | common : Restart cron container ----------------------------------------- 9.29s
2026-02-02 00:50:12.501449 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.99s
2026-02-02 00:50:12.501466 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.53s
2026-02-02 00:50:12.501477 | orchestrator | common : Copying over config.json files for services -------------------- 5.19s
2026-02-02 00:50:12.501487 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 5.11s
2026-02-02 00:50:12.501497 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.58s
2026-02-02 00:50:12.501507 | orchestrator | service-check-containers : common | Check containers -------------------- 4.44s
2026-02-02 00:50:12.501517 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.62s
2026-02-02 00:50:12.501596 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.59s
2026-02-02 00:50:12.501607 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.16s
2026-02-02 00:50:12.501617 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.13s
2026-02-02 00:50:12.501627 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.09s
2026-02-02 00:50:12.501637 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.96s
2026-02-02 00:50:12.501646 | orchestrator | common : Creating log volume -------------------------------------------- 2.71s
2026-02-02 00:50:12.501656 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.39s
2026-02-02 00:50:12.501666 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.21s
2026-02-02 00:50:12.501684 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 2.15s
2026-02-02 00:50:12.501694 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.04s
2026-02-02 00:50:12.501704 | orchestrator | 2026-02-02 00:50:12 | INFO  | Task 94cf3f77-ed8f-48b9-a1e3-608f25d54905 is in state STARTED
2026-02-02 00:50:12.501715 | orchestrator | 2026-02-02 00:50:12 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state STARTED
2026-02-02 00:50:12.501725 | orchestrator | 2026-02-02 00:50:12 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:12.501736 | orchestrator | 2026-02-02 00:50:12 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:12.502677 | orchestrator | 2026-02-02 00:50:12 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:12.502799 | orchestrator | 2026-02-02 00:50:12 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:15.548802 | orchestrator | 2026-02-02 00:50:15 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:15.548876 | orchestrator | 2026-02-02 00:50:15 | INFO  | Task 94cf3f77-ed8f-48b9-a1e3-608f25d54905 is in state STARTED
2026-02-02 00:50:15.548901 | orchestrator | 2026-02-02 00:50:15 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state STARTED
2026-02-02 00:50:15.548912 | orchestrator | 2026-02-02 00:50:15 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:15.548922 | orchestrator | 2026-02-02 00:50:15 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:15.548929 | orchestrator | 2026-02-02 00:50:15 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:15.548935 | orchestrator | 2026-02-02 00:50:15 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:18.571138 | orchestrator | 2026-02-02 00:50:18 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:18.571645 | orchestrator | 2026-02-02 00:50:18 | INFO  | Task 94cf3f77-ed8f-48b9-a1e3-608f25d54905 is in state STARTED
2026-02-02 00:50:18.573405 | orchestrator | 2026-02-02 00:50:18 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state STARTED
2026-02-02 00:50:18.573994 | orchestrator | 2026-02-02 00:50:18 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:18.574784 | orchestrator | 2026-02-02 00:50:18 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:18.575216 | orchestrator | 2026-02-02 00:50:18 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:18.576194 | orchestrator | 2026-02-02 00:50:18 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:21.615045 | orchestrator | 2026-02-02 00:50:21 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:21.615127 | orchestrator | 2026-02-02 00:50:21 | INFO  | Task 94cf3f77-ed8f-48b9-a1e3-608f25d54905 is in state STARTED
2026-02-02 00:50:21.615142 | orchestrator | 2026-02-02 00:50:21 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state STARTED
2026-02-02 00:50:21.616134 | orchestrator | 2026-02-02 00:50:21 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:21.618109 | orchestrator | 2026-02-02 00:50:21 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:21.623467 | orchestrator | 2026-02-02 00:50:21 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:21.623524 | orchestrator | 2026-02-02 00:50:21 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:24.725843 | orchestrator | 2026-02-02 00:50:24 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:24.725938 | orchestrator | 2026-02-02 00:50:24 | INFO  | Task 94cf3f77-ed8f-48b9-a1e3-608f25d54905 is in state STARTED
2026-02-02 00:50:24.725952 | orchestrator | 2026-02-02 00:50:24 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state STARTED
2026-02-02 00:50:24.725962 | orchestrator | 2026-02-02 00:50:24 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:24.725972 | orchestrator | 2026-02-02 00:50:24 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:24.725982 | orchestrator | 2026-02-02 00:50:24 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:24.725993 | orchestrator | 2026-02-02 00:50:24 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:27.739164 | orchestrator | 2026-02-02 00:50:27 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:27.745782 | orchestrator | 2026-02-02 00:50:27 | INFO  | Task 94cf3f77-ed8f-48b9-a1e3-608f25d54905 is in state STARTED
2026-02-02 00:50:27.745843 | orchestrator | 2026-02-02 00:50:27 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state STARTED
2026-02-02 00:50:27.746257 | orchestrator | 2026-02-02 00:50:27 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:27.748837 | orchestrator | 2026-02-02 00:50:27 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:27.753269 | orchestrator | 2026-02-02 00:50:27 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:27.753304 | orchestrator | 2026-02-02 00:50:27 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:30.802792 | orchestrator | 2026-02-02 00:50:30 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:30.802859 | orchestrator | 2026-02-02 00:50:30 | INFO  | Task 94cf3f77-ed8f-48b9-a1e3-608f25d54905 is in state STARTED
2026-02-02 00:50:30.805109 | orchestrator | 2026-02-02 00:50:30 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state STARTED
2026-02-02 00:50:30.805143 | orchestrator | 2026-02-02 00:50:30 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:30.805148 | orchestrator | 2026-02-02 00:50:30 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:30.806111 | orchestrator | 2026-02-02 00:50:30 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:30.806147 | orchestrator | 2026-02-02 00:50:30 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:34.024954 | orchestrator | 2026-02-02 00:50:34 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:34.026371 | orchestrator | 2026-02-02 00:50:34 | INFO  | Task 94cf3f77-ed8f-48b9-a1e3-608f25d54905 is in state STARTED
2026-02-02 00:50:34.031334 | orchestrator | 2026-02-02 00:50:34 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state STARTED
2026-02-02 00:50:34.031427 | orchestrator | 2026-02-02 00:50:34 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:34.032113 | orchestrator | 2026-02-02 00:50:34 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:34.034313 | orchestrator | 2026-02-02 00:50:34 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:34.034365 | orchestrator | 2026-02-02 00:50:34 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:37.091392 | orchestrator | 2026-02-02 00:50:37 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:37.092203 | orchestrator | 2026-02-02 00:50:37 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:50:37.093230 | orchestrator | 2026-02-02 00:50:37 | INFO  | Task 94cf3f77-ed8f-48b9-a1e3-608f25d54905 is in state SUCCESS
2026-02-02 00:50:37.093508 | orchestrator |
2026-02-02 00:50:37.093540 | orchestrator |
2026-02-02 00:50:37.093550 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 00:50:37.093560 | orchestrator |
2026-02-02 00:50:37.093568 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 00:50:37.093577 | orchestrator | Monday 02 February 2026 00:50:16 +0000 (0:00:00.366) 0:00:00.366 *******
2026-02-02 00:50:37.093585 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:50:37.093595 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:50:37.093603 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:50:37.093611 | orchestrator |
2026-02-02 00:50:37.093619 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 00:50:37.093628 | orchestrator | Monday 02 February 2026 00:50:17 +0000 (0:00:00.454) 0:00:00.820 *******
2026-02-02 00:50:37.093636 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-02-02 00:50:37.093718 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-02-02 00:50:37.093728 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-02-02 00:50:37.093779 | orchestrator |
2026-02-02 00:50:37.093789 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-02-02 00:50:37.093797 | orchestrator |
2026-02-02 00:50:37.093806 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-02-02 00:50:37.093814 | orchestrator | Monday 02 February 2026 00:50:17 +0000 (0:00:00.440) 0:00:01.261 *******
2026-02-02 00:50:37.093823 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:50:37.093832 | orchestrator |
2026-02-02 00:50:37.093840 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-02-02 00:50:37.093848 | orchestrator | Monday 02 February 2026 00:50:18 +0000 (0:00:00.509) 0:00:01.770 *******
2026-02-02 00:50:37.093857 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-02 00:50:37.093865 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-02 00:50:37.093873 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-02 00:50:37.093881 | orchestrator |
2026-02-02 00:50:37.093889 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-02-02 00:50:37.093897 | orchestrator | Monday 02 February 2026 00:50:18 +0000 (0:00:00.752) 0:00:02.523 *******
2026-02-02 00:50:37.093905 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-02 00:50:37.093914 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-02 00:50:37.093923 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-02 00:50:37.093931 | orchestrator | 
2026-02-02 00:50:37.093939 | orchestrator | TASK [service-check-containers : memcached | Check containers] *****************
2026-02-02 00:50:37.093947 | orchestrator | Monday 02 February 2026 00:50:21 +0000 (0:00:02.646) 0:00:05.169 *******
2026-02-02 00:50:37.093976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-02 00:50:37.094012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-02 00:50:37.094086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-02 00:50:37.094096 | orchestrator | 
2026-02-02 00:50:37.094105 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] ***
2026-02-02 00:50:37.094154 | orchestrator | Monday 02 February 2026 00:50:22 +0000 (0:00:01.399) 0:00:06.568 *******
2026-02-02 00:50:37.094166 | orchestrator | changed: [testbed-node-0] => {
2026-02-02 00:50:37.094176 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 00:50:37.094186 | orchestrator | }
2026-02-02 00:50:37.094196 | orchestrator | changed: [testbed-node-1] => {
2026-02-02 00:50:37.094206 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 00:50:37.094215 | orchestrator | }
2026-02-02 00:50:37.094224 | orchestrator | changed: [testbed-node-2] => {
2026-02-02 00:50:37.094234 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 00:50:37.094243 | orchestrator | }
2026-02-02 00:50:37.094252 | orchestrator | 
2026-02-02 00:50:37.094261 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-02 00:50:37.094271 | orchestrator | Monday 02 February 2026 00:50:23 +0000 (0:00:00.545) 0:00:07.114 *******
2026-02-02 00:50:37.094282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-02 00:50:37.094292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-02 00:50:37.094311 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:50:37.094321 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:50:37.094337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-02 00:50:37.094347 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:50:37.094356 | orchestrator | 
2026-02-02 00:50:37.094365 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-02-02 00:50:37.094374 | orchestrator | Monday 02 February 2026 00:50:25 +0000 (0:00:02.502) 0:00:09.617 *******
2026-02-02 00:50:37.094386 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:50:37.094400 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:50:37.094412 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:50:37.094422 | orchestrator | 
2026-02-02 00:50:37.094432 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:50:37.094442 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-02 00:50:37.094456 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-02 00:50:37.094470 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-02 00:50:37.094483 | orchestrator | 
2026-02-02 00:50:37.094496 | orchestrator | 
2026-02-02 00:50:37.094511 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:50:37.094526 | orchestrator | Monday 02 February 2026 00:50:34 +0000 (0:00:08.250) 0:00:17.867 *******
2026-02-02 00:50:37.094549 | orchestrator | ===============================================================================
2026-02-02 00:50:37.094563 | orchestrator | memcached : Restart memcached container --------------------------------- 8.25s
2026-02-02 00:50:37.094577 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.64s
2026-02-02 00:50:37.094586 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.50s
2026-02-02 00:50:37.094594 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.40s
2026-02-02 00:50:37.094602 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.76s
2026-02-02 00:50:37.094610 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.55s
2026-02-02 00:50:37.094618 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.51s
2026-02-02 00:50:37.094626 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s
2026-02-02 00:50:37.094634 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2026-02-02 00:50:37.094717 | orchestrator | 2026-02-02 00:50:37 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state STARTED
2026-02-02 00:50:37.096196 | orchestrator | 2026-02-02 00:50:37 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:37.102562 | orchestrator | 2026-02-02 00:50:37 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:37.103627 | orchestrator | 2026-02-02 00:50:37 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:37.103667 | orchestrator | 2026-02-02 00:50:37 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:40.171954 | orchestrator | 2026-02-02 00:50:40 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:40.173176 | orchestrator | 2026-02-02 00:50:40 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:50:40.174264 | orchestrator | 2026-02-02 00:50:40 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state STARTED
2026-02-02 00:50:40.176924 | orchestrator | 2026-02-02 00:50:40 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:40.178413 | orchestrator | 2026-02-02 00:50:40 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:40.179538 | orchestrator | 2026-02-02 00:50:40 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:40.179548 | orchestrator | 2026-02-02 00:50:40 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:43.319702 | orchestrator | 2026-02-02 00:50:43 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:43.319831 | orchestrator | 2026-02-02 00:50:43 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:50:43.320798 | orchestrator | 2026-02-02 00:50:43 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state STARTED
2026-02-02 00:50:43.321588 | orchestrator | 2026-02-02 00:50:43 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:43.322328 | orchestrator | 2026-02-02 00:50:43 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:43.323239 | orchestrator | 2026-02-02 00:50:43 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:43.323289 | orchestrator | 2026-02-02 00:50:43 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:46.360905 | orchestrator | 2026-02-02 00:50:46 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:46.361780 | orchestrator | 2026-02-02 00:50:46 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:50:46.362774 | orchestrator | 2026-02-02 00:50:46 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state STARTED
2026-02-02 00:50:46.365036 | orchestrator | 2026-02-02 00:50:46 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:46.365982 | orchestrator | 2026-02-02 00:50:46 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:46.367391 | orchestrator | 2026-02-02 00:50:46 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:46.367439 | orchestrator | 2026-02-02 00:50:46 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:49.460137 | orchestrator | 2026-02-02 00:50:49 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:49.460225 | orchestrator | 2026-02-02 00:50:49 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:50:49.460235 | orchestrator | 2026-02-02 00:50:49 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state STARTED
2026-02-02 00:50:49.462067 | orchestrator | 2026-02-02 00:50:49 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:49.462705 | orchestrator | 2026-02-02 00:50:49 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:49.463879 | orchestrator | 2026-02-02 00:50:49 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:49.463907 | orchestrator | 2026-02-02 00:50:49 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:52.514955 | orchestrator | 2026-02-02 00:50:52 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:52.516665 | orchestrator | 2026-02-02 00:50:52 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:50:52.519707 | orchestrator | 2026-02-02 00:50:52 | INFO  | Task 7beb5736-4adb-4e03-b8ec-9e8c05df6279 is in state SUCCESS
2026-02-02 00:50:52.521539 | orchestrator | 
2026-02-02 00:50:52.521574 | orchestrator | 
2026-02-02 00:50:52.521580 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 00:50:52.521588 | orchestrator | 
2026-02-02 00:50:52.521593 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 00:50:52.521597 | orchestrator | Monday 02 February 2026 00:50:16 +0000 (0:00:00.345) 0:00:00.346 *******
2026-02-02 00:50:52.521601 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:50:52.521607 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:50:52.521611 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:50:52.521615 | orchestrator | 
2026-02-02 00:50:52.521619 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 00:50:52.521623 | orchestrator | Monday 02 February 2026 00:50:17 +0000 (0:00:00.370) 0:00:00.716 *******
2026-02-02 00:50:52.521627 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-02-02 00:50:52.521631 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-02-02 00:50:52.521635 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-02-02 00:50:52.521638 | orchestrator | 
2026-02-02 00:50:52.521642 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-02-02 00:50:52.521646 | orchestrator | 
2026-02-02 00:50:52.521650 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-02-02 00:50:52.521654 | orchestrator | Monday 02 February 2026 00:50:17 +0000 (0:00:00.647) 0:00:01.364 *******
2026-02-02 00:50:52.521657 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:50:52.521662 | orchestrator | 
2026-02-02 00:50:52.521666 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-02-02 00:50:52.521669 | orchestrator | Monday 02 February 2026 00:50:18 +0000 (0:00:00.824) 0:00:02.189 *******
2026-02-02 00:50:52.521680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521757 | orchestrator | 
2026-02-02 00:50:52.521761 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-02-02 00:50:52.521765 | orchestrator | Monday 02 February 2026 00:50:20 +0000 (0:00:01.499) 0:00:03.688 *******
2026-02-02 00:50:52.521769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521806 | orchestrator | 
2026-02-02 00:50:52.521810 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-02-02 00:50:52.521814 | orchestrator | Monday 02 February 2026 00:50:23 +0000 (0:00:03.145) 0:00:06.834 *******
2026-02-02 00:50:52.521817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521859 | orchestrator | 
2026-02-02 00:50:52.521865 | orchestrator | TASK [service-check-containers : redis | Check containers] *********************
2026-02-02 00:50:52.521871 | orchestrator | Monday 02 February 2026 00:50:26 +0000 (0:00:03.689) 0:00:10.523 *******
2026-02-02 00:50:52.521878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-02 00:50:52.521928 | orchestrator | 
2026-02-02 00:50:52.521934 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] ***
2026-02-02 00:50:52.521940 | orchestrator | Monday 02 February 2026 00:50:29 +0000 (0:00:02.879) 0:00:13.402 *******
2026-02-02 00:50:52.521946 | orchestrator | changed: [testbed-node-0] => {
2026-02-02 00:50:52.521953 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 00:50:52.521959 | orchestrator | }
2026-02-02 00:50:52.521965 | orchestrator | changed: [testbed-node-1] => {
2026-02-02 00:50:52.521971 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 00:50:52.521977 | orchestrator | }
2026-02-02 00:50:52.521984 | orchestrator | changed: [testbed-node-2] => {
2026-02-02 00:50:52.521989 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 00:50:52.521994 | orchestrator | }
2026-02-02 00:50:52.522001 | orchestrator | 
2026-02-02 00:50:52.522006 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-02 00:50:52.522012 | orchestrator | Monday 02 February 2026 00:50:30 +0000 (0:00:00.817) 0:00:14.219 *******
2026-02-02
00:50:52.522082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-02 00:50:52.522100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-02 00:50:52.522107 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:50:52.522113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-02 00:50:52.522119 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-02 00:50:52.522125 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:50:52.522131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-02 00:50:52.522144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-02 00:50:52.522151 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:50:52.522157 | orchestrator |
2026-02-02 00:50:52.522163 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-02 00:50:52.522169 | orchestrator | Monday 02 February 2026 00:50:32 +0000 (0:00:01.980) 0:00:16.200 *******
2026-02-02 00:50:52.522175 | orchestrator |
2026-02-02 00:50:52.522181 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-02 00:50:52.522187 | orchestrator | Monday 02 February 2026 00:50:32 +0000 (0:00:00.090) 0:00:16.290 *******
2026-02-02 00:50:52.522199 | orchestrator |
2026-02-02 00:50:52.522205 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-02 00:50:52.522212 | orchestrator | Monday 02 February 2026 00:50:32 +0000 (0:00:00.079) 0:00:16.370 *******
2026-02-02 00:50:52.522219 | orchestrator |
2026-02-02 00:50:52.522225 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-02-02 00:50:52.522231 | orchestrator | Monday 02 February 2026 00:50:32 +0000 (0:00:00.087) 0:00:16.457 *******
2026-02-02 00:50:52.522238 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:50:52.522244 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:50:52.522250 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:50:52.522257 | orchestrator |
2026-02-02 00:50:52.522264 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-02-02 00:50:52.522270 | orchestrator | Monday 02 February 2026 00:50:41 +0000 (0:00:08.232) 0:00:24.690 *******
2026-02-02 00:50:52.522277 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:50:52.522287 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:50:52.522293 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:50:52.522299 | orchestrator |
2026-02-02 00:50:52.522305 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:50:52.522312 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-02 00:50:52.522320 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-02 00:50:52.522326 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-02 00:50:52.522363 | orchestrator |
2026-02-02 00:50:52.522367 | orchestrator |
2026-02-02 00:50:52.522370 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:50:52.522374 | orchestrator | Monday 02 February 2026 00:50:50 +0000 (0:00:09.171) 0:00:33.861 *******
2026-02-02 00:50:52.522378 | orchestrator | ===============================================================================
2026-02-02 00:50:52.522382 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.17s
2026-02-02 00:50:52.522386 | orchestrator | redis : Restart redis container ----------------------------------------- 8.23s
2026-02-02 00:50:52.522390 | orchestrator | redis : Copying over redis config files --------------------------------- 3.69s
2026-02-02 00:50:52.522393 | orchestrator | redis : Copying over default config.json files -------------------------- 3.15s
2026-02-02 00:50:52.522397 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.88s
2026-02-02 00:50:52.522401 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.98s
2026-02-02 00:50:52.522405 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.50s
2026-02-02 00:50:52.522409 | orchestrator | redis : include_tasks --------------------------------------------------- 0.82s
2026-02-02 00:50:52.522413 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.82s
2026-02-02 00:50:52.522417 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2026-02-02 00:50:52.522421 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s
2026-02-02 00:50:52.522424 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.26s
2026-02-02 00:50:52.522493 | orchestrator | 2026-02-02 00:50:52 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:52.525465 | orchestrator | 2026-02-02 00:50:52 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:52.528071 | orchestrator | 2026-02-02 00:50:52 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:52.528372 | orchestrator | 2026-02-02 00:50:52 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:55.593439 | orchestrator | 2026-02-02 00:50:55 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:55.595326 | orchestrator | 2026-02-02 00:50:55 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:50:55.595568 | orchestrator | 2026-02-02 00:50:55 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:55.596808 | orchestrator | 2026-02-02 00:50:55 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:55.597836 | orchestrator | 2026-02-02 00:50:55 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:55.599409 | orchestrator | 2026-02-02 00:50:55 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:50:58.687225 | orchestrator | 2026-02-02 00:50:58 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:50:58.688165 | orchestrator |
2026-02-02 00:50:58 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:50:58.689426 | orchestrator | 2026-02-02 00:50:58 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:50:58.690649 | orchestrator | 2026-02-02 00:50:58 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:50:58.692501 | orchestrator | 2026-02-02 00:50:58 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:50:58.692533 | orchestrator | 2026-02-02 00:50:58 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:51:01.743003 | orchestrator | 2026-02-02 00:51:01 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:51:01.743097 | orchestrator | 2026-02-02 00:51:01 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:51:01.743119 | orchestrator | 2026-02-02 00:51:01 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:51:01.743155 | orchestrator | 2026-02-02 00:51:01 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:51:01.743170 | orchestrator | 2026-02-02 00:51:01 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:51:01.743187 | orchestrator | 2026-02-02 00:51:01 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:51:04.781820 | orchestrator | 2026-02-02 00:51:04 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:51:04.785622 | orchestrator | 2026-02-02 00:51:04 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:51:04.789090 | orchestrator | 2026-02-02 00:51:04 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:51:04.795765 | orchestrator | 2026-02-02 00:51:04 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:51:04.798074 | orchestrator | 2026-02-02 00:51:04 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:51:04.798217 | orchestrator | 2026-02-02 00:51:04 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:51:07.837959 | orchestrator | 2026-02-02 00:51:07 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:51:07.838545 | orchestrator | 2026-02-02 00:51:07 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:51:07.840203 | orchestrator | 2026-02-02 00:51:07 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:51:07.841510 | orchestrator | 2026-02-02 00:51:07 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:51:07.842971 | orchestrator | 2026-02-02 00:51:07 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:51:07.843088 | orchestrator | 2026-02-02 00:51:07 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:51:10.893637 | orchestrator | 2026-02-02 00:51:10 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:51:10.894968 | orchestrator | 2026-02-02 00:51:10 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:51:10.896515 | orchestrator | 2026-02-02 00:51:10 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:51:10.898121 | orchestrator | 2026-02-02 00:51:10 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:51:10.899767 | orchestrator | 2026-02-02 00:51:10 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:51:10.899797 | orchestrator | 2026-02-02 00:51:10 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:51:13.941472 | orchestrator | 2026-02-02 00:51:13 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:51:13.941754 | orchestrator | 2026-02-02 00:51:13 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:51:13.943362 | orchestrator | 2026-02-02 00:51:13 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:51:13.946247 | orchestrator | 2026-02-02 00:51:13 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:51:13.946829 | orchestrator | 2026-02-02 00:51:13 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:51:13.946956 | orchestrator | 2026-02-02 00:51:13 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:51:16.986294 | orchestrator | 2026-02-02 00:51:16 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:51:16.989424 | orchestrator | 2026-02-02 00:51:16 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:51:16.993029 | orchestrator | 2026-02-02 00:51:16 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:51:16.995165 | orchestrator | 2026-02-02 00:51:16 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:51:16.997449 | orchestrator | 2026-02-02 00:51:16 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:51:16.997545 | orchestrator | 2026-02-02 00:51:17 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:51:20.127753 | orchestrator | 2026-02-02 00:51:20 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:51:20.131254 | orchestrator | 2026-02-02 00:51:20 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:51:20.131755 | orchestrator | 2026-02-02 00:51:20 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:51:20.132718 | orchestrator | 2026-02-02 00:51:20 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:51:20.133404 | orchestrator | 2026-02-02 00:51:20 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:51:20.133458 | orchestrator | 2026-02-02 00:51:20 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:51:23.194859 | orchestrator | 2026-02-02 00:51:23 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:51:23.195495 | orchestrator | 2026-02-02 00:51:23 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:51:23.197001 | orchestrator | 2026-02-02 00:51:23 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:51:23.197499 | orchestrator | 2026-02-02 00:51:23 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:51:23.198202 | orchestrator | 2026-02-02 00:51:23 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:51:23.198378 | orchestrator | 2026-02-02 00:51:23 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:51:26.243801 | orchestrator | 2026-02-02 00:51:26 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:51:26.244707 | orchestrator | 2026-02-02 00:51:26 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:51:26.246910 | orchestrator | 2026-02-02 00:51:26 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:51:26.248684 | orchestrator | 2026-02-02 00:51:26 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:51:26.250762 | orchestrator | 2026-02-02 00:51:26 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:51:26.250836 | orchestrator | 2026-02-02 00:51:26 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:51:29.285244 | orchestrator | 2026-02-02 00:51:29 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state STARTED
2026-02-02 00:51:29.285549 | orchestrator | 2026-02-02 00:51:29 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:51:29.286561 | orchestrator | 2026-02-02 00:51:29 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED
2026-02-02 00:51:29.287647 | orchestrator | 2026-02-02 00:51:29 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:51:29.289887 | orchestrator | 2026-02-02 00:51:29 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:51:29.289965 | orchestrator | 2026-02-02 00:51:29 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:51:32.336612 | orchestrator | 2026-02-02 00:51:32 | INFO  | Task f8048d4f-3b97-40b5-a9cd-c30fd6cb8ada is in state SUCCESS
2026-02-02 00:51:32.338520 | orchestrator |
2026-02-02 00:51:32.338652 | orchestrator |
2026-02-02 00:51:32.338818 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 00:51:32.338878 | orchestrator |
2026-02-02 00:51:32.338897 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 00:51:32.338913 | orchestrator | Monday 02 February 2026 00:50:15 +0000 (0:00:00.260) 0:00:00.260 *******
2026-02-02 00:51:32.338928 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:51:32.338945 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:51:32.338963 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:51:32.338981 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:51:32.338997 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:51:32.339010 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:51:32.339022 | orchestrator |
2026-02-02 00:51:32.339033 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 00:51:32.339044 | orchestrator | Monday 02 February 2026 00:50:16 +0000 (0:00:00.820) 0:00:01.081 *******
2026-02-02 00:51:32.339055 | orchestrator | ok: [testbed-node-3] =>
(item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-02 00:51:32.339067 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-02 00:51:32.339078 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-02 00:51:32.339090 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-02 00:51:32.339101 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-02 00:51:32.339131 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-02 00:51:32.339142 | orchestrator |
2026-02-02 00:51:32.339153 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-02-02 00:51:32.339165 | orchestrator |
2026-02-02 00:51:32.339176 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-02-02 00:51:32.339187 | orchestrator | Monday 02 February 2026 00:50:17 +0000 (0:00:00.783) 0:00:01.865 *******
2026-02-02 00:51:32.339198 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:51:32.339210 | orchestrator |
2026-02-02 00:51:32.339228 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-02 00:51:32.339239 | orchestrator | Monday 02 February 2026 00:50:19 +0000 (0:00:02.094) 0:00:03.960 *******
2026-02-02 00:51:32.339250 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-02-02 00:51:32.339261 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-02-02 00:51:32.339272 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-02-02 00:51:32.339283 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-02-02 00:51:32.339294 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-02-02 00:51:32.339305 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-02-02 00:51:32.339316 | orchestrator |
2026-02-02 00:51:32.339327 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-02 00:51:32.339338 | orchestrator | Monday 02 February 2026 00:50:21 +0000 (0:00:02.013) 0:00:05.973 *******
2026-02-02 00:51:32.339349 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-02-02 00:51:32.339397 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-02-02 00:51:32.339408 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-02-02 00:51:32.339418 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-02-02 00:51:32.339428 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-02-02 00:51:32.339437 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-02-02 00:51:32.339447 | orchestrator |
2026-02-02 00:51:32.339457 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-02 00:51:32.339467 | orchestrator | Monday 02 February 2026 00:50:24 +0000 (0:00:02.469) 0:00:08.443 *******
2026-02-02 00:51:32.339477 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-02-02 00:51:32.339487 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:51:32.339497 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-02-02 00:51:32.339507 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-02-02 00:51:32.339517 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:51:32.339531 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-02-02 00:51:32.339549 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:51:32.339565 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-02-02 00:51:32.339584 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:51:32.339664 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:51:32.339713 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-02-02 00:51:32.339724 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:51:32.339733 | orchestrator |
2026-02-02 00:51:32.339743 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-02-02 00:51:32.339753 | orchestrator | Monday 02 February 2026 00:50:26 +0000 (0:00:02.053) 0:00:10.497 *******
2026-02-02 00:51:32.339763 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:51:32.339773 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:51:32.339783 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:51:32.339793 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:51:32.339802 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:51:32.339812 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:51:32.339836 | orchestrator |
2026-02-02 00:51:32.339846 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-02-02 00:51:32.339856 | orchestrator | Monday 02 February 2026 00:50:28 +0000 (0:00:02.016) 0:00:12.513 *******
2026-02-02 00:51:32.339888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client
list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.339905 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.339922 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.339934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.339944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.339955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.339980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.339991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.340035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340085 | orchestrator |
2026-02-02 00:51:32.340095 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-02-02 00:51:32.340105 | orchestrator | Monday 02 February 2026 00:50:31 +0000 (0:00:02.947) 0:00:15.460 *******
2026-02-02 00:51:32.340115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.340126 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.340137 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.340148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.340158 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.340204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.340224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340234 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340289 | orchestrator |
2026-02-02 00:51:32.340300 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-02-02 00:51:32.340344 | orchestrator | Monday 02 February 2026 00:50:35 +0000 (0:00:04.803) 0:00:20.264 *******
2026-02-02 00:51:32.340356 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:51:32.340366 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:51:32.340376 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:51:32.340386 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:51:32.340395 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:51:32.340405 | orchestrator |
skipping: [testbed-node-2]
2026-02-02 00:51:32.340415 | orchestrator |
2026-02-02 00:51:32.340425 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-02-02 00:51:32.340434 | orchestrator | Monday 02 February 2026 00:50:37 +0000 (0:00:01.940) 0:00:22.207 *******
2026-02-02 00:51:32.340449 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.340460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.340470 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.340488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.340504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340529 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.340555 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-02 00:51:32.340597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-02 00:51:32.340635 | orchestrator |
2026-02-02 00:51:32.340653 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-02-02 00:51:32.340669 | orchestrator | Monday 02 February 2026 00:50:42 +0000 (0:00:04.444) 0:00:26.652 *******
2026-02-02 00:51:32.340705 | orchestrator | changed: [testbed-node-3] => {
2026-02-02 00:51:32.340716 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 00:51:32.340726 | orchestrator | }
2026-02-02 00:51:32.340736 | orchestrator | changed: [testbed-node-4] => {
2026-02-02 00:51:32.340751 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 00:51:32.340761 | orchestrator | }
2026-02-02 00:51:32.340771 | orchestrator | changed: [testbed-node-5] => {
2026-02-02 00:51:32.340781 | orchestrator |    
"msg": "Notifying handlers" 2026-02-02 00:51:32.340790 | orchestrator | } 2026-02-02 00:51:32.340800 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 00:51:32.340809 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:51:32.340826 | orchestrator | } 2026-02-02 00:51:32.340835 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 00:51:32.340845 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:51:32.340854 | orchestrator | } 2026-02-02 00:51:32.340864 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 00:51:32.340874 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:51:32.340884 | orchestrator | } 2026-02-02 00:51:32.340893 | orchestrator | 2026-02-02 00:51:32.340903 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 00:51:32.340913 | orchestrator | Monday 02 February 2026 00:50:43 +0000 (0:00:01.173) 0:00:27.825 ******* 2026-02-02 00:51:32.340923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-02 00:51:32.340934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': 
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-02 00:51:32.340944 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:51:32.340960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-02 00:51:32.340972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-02 00:51:32.340982 | 
orchestrator | skipping: [testbed-node-4] 2026-02-02 00:51:32.340999 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-02 00:51:32.341015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-02 00:51:32.341026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-02 00:51:32.341035 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:51:32.341045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-02 00:51:32.341056 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:51:32.341072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-02 00:51:32.341082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-02 00:51:32.341100 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:51:32.341114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-02 00:51:32.341125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-02 00:51:32.341135 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:51:32.341145 | orchestrator | 2026-02-02 00:51:32.341155 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 00:51:32.341165 | orchestrator | Monday 02 February 2026 00:50:45 +0000 (0:00:02.267) 0:00:30.092 ******* 2026-02-02 00:51:32.341175 | orchestrator | 2026-02-02 00:51:32.341184 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 00:51:32.341194 | orchestrator | Monday 02 February 2026 00:50:45 +0000 (0:00:00.129) 0:00:30.222 ******* 2026-02-02 00:51:32.341204 | orchestrator | 2026-02-02 00:51:32.341214 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 00:51:32.341224 | orchestrator | Monday 02 February 2026 00:50:46 +0000 (0:00:00.140) 0:00:30.363 ******* 2026-02-02 00:51:32.341234 | orchestrator | 2026-02-02 00:51:32.341244 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 00:51:32.341254 | orchestrator | Monday 02 February 2026 00:50:46 +0000 (0:00:00.133) 0:00:30.496 ******* 2026-02-02 00:51:32.341263 | orchestrator | 2026-02-02 00:51:32.341273 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 00:51:32.341283 | orchestrator | Monday 02 February 2026 00:50:46 +0000 (0:00:00.308) 0:00:30.805 ******* 2026-02-02 00:51:32.341293 | orchestrator | 2026-02-02 00:51:32.341302 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 00:51:32.341312 | orchestrator | Monday 02 February 2026 00:50:46 +0000 (0:00:00.182) 0:00:30.987 ******* 2026-02-02 00:51:32.341322 
| orchestrator | 2026-02-02 00:51:32.341332 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-02 00:51:32.341342 | orchestrator | Monday 02 February 2026 00:50:46 +0000 (0:00:00.156) 0:00:31.143 ******* 2026-02-02 00:51:32.341352 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:51:32.341364 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:51:32.341381 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:51:32.341391 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:51:32.341401 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:51:32.341410 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:51:32.341420 | orchestrator | 2026-02-02 00:51:32.341430 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-02 00:51:32.341445 | orchestrator | Monday 02 February 2026 00:50:55 +0000 (0:00:08.875) 0:00:40.019 ******* 2026-02-02 00:51:32.341462 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:51:32.341472 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:51:32.341482 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:51:32.341492 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:51:32.341501 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:51:32.341511 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:51:32.341521 | orchestrator | 2026-02-02 00:51:32.341531 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-02 00:51:32.341541 | orchestrator | Monday 02 February 2026 00:50:57 +0000 (0:00:01.630) 0:00:41.649 ******* 2026-02-02 00:51:32.341550 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:51:32.341560 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:51:32.341570 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:51:32.341580 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:51:32.341589 | orchestrator | changed: [testbed-node-3] 
2026-02-02 00:51:32.341601 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:51:32.341618 | orchestrator | 2026-02-02 00:51:32.341635 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-02 00:51:32.341651 | orchestrator | Monday 02 February 2026 00:51:07 +0000 (0:00:10.554) 0:00:52.204 ******* 2026-02-02 00:51:32.341669 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-02 00:51:32.341747 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-02 00:51:32.341759 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-02 00:51:32.341769 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-02 00:51:32.341779 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-02 00:51:32.341789 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-02 00:51:32.341804 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-02 00:51:32.341814 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-02 00:51:32.341824 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-02 00:51:32.341834 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-02 00:51:32.341844 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 
'value': 'testbed-node-1'}) 2026-02-02 00:51:32.341854 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-02 00:51:32.341863 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-02 00:51:32.341873 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-02 00:51:32.341883 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-02 00:51:32.341893 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-02 00:51:32.341902 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-02 00:51:32.341912 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-02 00:51:32.341922 | orchestrator | 2026-02-02 00:51:32.341932 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-02 00:51:32.341949 | orchestrator | Monday 02 February 2026 00:51:15 +0000 (0:00:07.870) 0:01:00.074 ******* 2026-02-02 00:51:32.341959 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-02 00:51:32.341969 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:51:32.341978 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-02 00:51:32.341988 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:51:32.341998 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-02 00:51:32.342007 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:51:32.342061 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-02 00:51:32.342074 | orchestrator 
| changed: [testbed-node-1] => (item=br-ex) 2026-02-02 00:51:32.342084 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-02 00:51:32.342095 | orchestrator | 2026-02-02 00:51:32.342104 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-02 00:51:32.342114 | orchestrator | Monday 02 February 2026 00:51:18 +0000 (0:00:02.541) 0:01:02.616 ******* 2026-02-02 00:51:32.342124 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-02 00:51:32.342134 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:51:32.342143 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-02 00:51:32.342158 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:51:32.342172 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-02 00:51:32.342183 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:51:32.342194 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-02 00:51:32.342214 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-02 00:51:32.342226 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-02 00:51:32.342237 | orchestrator | 2026-02-02 00:51:32.342249 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-02 00:51:32.342260 | orchestrator | Monday 02 February 2026 00:51:22 +0000 (0:00:04.136) 0:01:06.753 ******* 2026-02-02 00:51:32.342271 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:51:32.342282 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:51:32.342293 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:51:32.342304 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:51:32.342314 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:51:32.342325 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:51:32.342336 | orchestrator | 2026-02-02 
00:51:32.342347 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:51:32.342359 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-02 00:51:32.342370 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-02 00:51:32.342381 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-02 00:51:32.342392 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 00:51:32.342403 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 00:51:32.342420 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 00:51:32.342431 | orchestrator | 2026-02-02 00:51:32.342442 | orchestrator | 2026-02-02 00:51:32.342453 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 00:51:32.342465 | orchestrator | Monday 02 February 2026 00:51:29 +0000 (0:00:07.309) 0:01:14.062 ******* 2026-02-02 00:51:32.342475 | orchestrator | =============================================================================== 2026-02-02 00:51:32.342494 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.86s 2026-02-02 00:51:32.342505 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.88s 2026-02-02 00:51:32.342516 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.87s 2026-02-02 00:51:32.342527 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.80s 2026-02-02 00:51:32.342538 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 4.44s 
2026-02-02 00:51:32.342549 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.14s 2026-02-02 00:51:32.342560 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.95s 2026-02-02 00:51:32.342571 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.54s 2026-02-02 00:51:32.342582 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.47s 2026-02-02 00:51:32.342593 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.27s 2026-02-02 00:51:32.342604 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.09s 2026-02-02 00:51:32.342615 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.06s 2026-02-02 00:51:32.342626 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.01s 2026-02-02 00:51:32.342637 | orchestrator | module-load : Load modules ---------------------------------------------- 2.01s 2026-02-02 00:51:32.342648 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.94s 2026-02-02 00:51:32.342666 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.63s 2026-02-02 00:51:32.342754 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.17s 2026-02-02 00:51:32.342778 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.05s 2026-02-02 00:51:32.342790 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s 2026-02-02 00:51:32.342801 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s 2026-02-02 00:51:32.342812 | orchestrator | 2026-02-02 00:51:32 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state 
STARTED 2026-02-02 00:51:32.342824 | orchestrator | 2026-02-02 00:51:32 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:51:32.342835 | orchestrator | 2026-02-02 00:51:32 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:51:32.342846 | orchestrator | 2026-02-02 00:51:32 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:51:32.343477 | orchestrator | 2026-02-02 00:51:32 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:51:32.343528 | orchestrator | 2026-02-02 00:51:32 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:51:35.370318 | orchestrator | 2026-02-02 00:51:35 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:51:35.373819 | orchestrator | 2026-02-02 00:51:35 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:51:35.380086 | orchestrator | 2026-02-02 00:51:35 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:51:35.380179 | orchestrator | 2026-02-02 00:51:35 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:51:35.380190 | orchestrator | 2026-02-02 00:51:35 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:51:35.380197 | orchestrator | 2026-02-02 00:51:35 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:51:38.423640 | orchestrator | 2026-02-02 00:51:38 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:51:38.425135 | orchestrator | 2026-02-02 00:51:38 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:51:38.426716 | orchestrator | 2026-02-02 00:51:38 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:51:38.428322 | orchestrator | 2026-02-02 00:51:38 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 
2026-02-02 00:51:38.431018 | orchestrator | 2026-02-02 00:51:38 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:51:38.431086 | orchestrator | 2026-02-02 00:51:38 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:51:41.461934 | orchestrator | 2026-02-02 00:51:41 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:51:41.462307 | orchestrator | 2026-02-02 00:51:41 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:51:41.463105 | orchestrator | 2026-02-02 00:51:41 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:51:41.463950 | orchestrator | 2026-02-02 00:51:41 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:51:41.464663 | orchestrator | 2026-02-02 00:51:41 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:51:41.464851 | orchestrator | 2026-02-02 00:51:41 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:51:44.660129 | orchestrator | 2026-02-02 00:51:44 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:51:44.660630 | orchestrator | 2026-02-02 00:51:44 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:51:44.661536 | orchestrator | 2026-02-02 00:51:44 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:51:44.662225 | orchestrator | 2026-02-02 00:51:44 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:51:44.662886 | orchestrator | 2026-02-02 00:51:44 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:51:44.663020 | orchestrator | 2026-02-02 00:51:44 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:51:47.711146 | orchestrator | 2026-02-02 00:51:47 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:51:47.714209 | 
orchestrator | 2026-02-02 00:51:47 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:51:47.715568 | orchestrator | 2026-02-02 00:51:47 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:51:47.716191 | orchestrator | 2026-02-02 00:51:47 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:51:47.717523 | orchestrator | 2026-02-02 00:51:47 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:51:47.717557 | orchestrator | 2026-02-02 00:51:47 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:51:50.763220 | orchestrator | 2026-02-02 00:51:50 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:51:50.764118 | orchestrator | 2026-02-02 00:51:50 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:51:50.765142 | orchestrator | 2026-02-02 00:51:50 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:51:50.766495 | orchestrator | 2026-02-02 00:51:50 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:51:50.767423 | orchestrator | 2026-02-02 00:51:50 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:51:50.767487 | orchestrator | 2026-02-02 00:51:50 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:51:53.801709 | orchestrator | 2026-02-02 00:51:53 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:51:53.804435 | orchestrator | 2026-02-02 00:51:53 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:51:53.811051 | orchestrator | 2026-02-02 00:51:53 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:51:53.811098 | orchestrator | 2026-02-02 00:51:53 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:51:53.811522 | 
orchestrator | 2026-02-02 00:51:53 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:51:53.811597 | orchestrator | 2026-02-02 00:51:53 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:51:56.920485 | orchestrator | 2026-02-02 00:51:56 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:51:56.920551 | orchestrator | 2026-02-02 00:51:56 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:51:56.920563 | orchestrator | 2026-02-02 00:51:56 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:51:56.922897 | orchestrator | 2026-02-02 00:51:56 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:51:56.922937 | orchestrator | 2026-02-02 00:51:56 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:51:56.922950 | orchestrator | 2026-02-02 00:51:56 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:52:00.055367 | orchestrator | 2026-02-02 00:52:00 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:52:00.055578 | orchestrator | 2026-02-02 00:52:00 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:52:00.056834 | orchestrator | 2026-02-02 00:52:00 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:52:00.057199 | orchestrator | 2026-02-02 00:52:00 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:52:00.057919 | orchestrator | 2026-02-02 00:52:00 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:52:00.057935 | orchestrator | 2026-02-02 00:52:00 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:52:03.107441 | orchestrator | 2026-02-02 00:52:03 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:52:03.107889 | orchestrator | 2026-02-02 
00:52:03 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:52:03.108734 | orchestrator | 2026-02-02 00:52:03 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:52:03.109513 | orchestrator | 2026-02-02 00:52:03 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:52:03.115193 | orchestrator | 2026-02-02 00:52:03 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:52:03.115258 | orchestrator | 2026-02-02 00:52:03 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:52:06.149388 | orchestrator | 2026-02-02 00:52:06 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:52:06.149984 | orchestrator | 2026-02-02 00:52:06 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:52:06.151214 | orchestrator | 2026-02-02 00:52:06 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:52:06.154137 | orchestrator | 2026-02-02 00:52:06 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:52:06.154825 | orchestrator | 2026-02-02 00:52:06 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:52:06.155081 | orchestrator | 2026-02-02 00:52:06 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:52:09.242859 | orchestrator | 2026-02-02 00:52:09 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:52:09.244664 | orchestrator | 2026-02-02 00:52:09 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:52:09.247690 | orchestrator | 2026-02-02 00:52:09 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:52:09.250645 | orchestrator | 2026-02-02 00:52:09 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:52:09.250680 | orchestrator | 2026-02-02 
00:52:09 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:52:09.250689 | orchestrator | 2026-02-02 00:52:09 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:52:12.420203 | orchestrator | 2026-02-02 00:52:12 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:52:12.422878 | orchestrator | 2026-02-02 00:52:12 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state STARTED 2026-02-02 00:52:12.423791 | orchestrator | 2026-02-02 00:52:12 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:52:12.425491 | orchestrator | 2026-02-02 00:52:12 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:52:12.426986 | orchestrator | 2026-02-02 00:52:12 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:52:12.427032 | orchestrator | 2026-02-02 00:52:12 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:52:15.501048 | orchestrator | 2026-02-02 00:52:15 | INFO  | Task b8cf4bba-e3db-4370-b1aa-e63ed8a0d093 is in state STARTED 2026-02-02 00:52:15.507463 | orchestrator | 2026-02-02 00:52:15 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:52:15.509206 | orchestrator | 2026-02-02 00:52:15 | INFO  | Task 60cf2049-b787-40f5-b786-8b270672f55e is in state SUCCESS 2026-02-02 00:52:15.510731 | orchestrator | 2026-02-02 00:52:15.510784 | orchestrator | 2026-02-02 00:52:15.510798 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-02 00:52:15.510859 | orchestrator | 2026-02-02 00:52:15.510873 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-02 00:52:15.510928 | orchestrator | Monday 02 February 2026 00:47:35 +0000 (0:00:00.200) 0:00:00.200 ******* 2026-02-02 00:52:15.510939 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:52:15.510971 | 
orchestrator | ok: [testbed-node-4] 2026-02-02 00:52:15.510982 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:52:15.510993 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:52:15.511004 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:52:15.511015 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:52:15.511025 | orchestrator | 2026-02-02 00:52:15.511037 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-02 00:52:15.511048 | orchestrator | Monday 02 February 2026 00:47:36 +0000 (0:00:00.801) 0:00:01.001 ******* 2026-02-02 00:52:15.511059 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:52:15.511071 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:52:15.511082 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:52:15.511093 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:52:15.511104 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:52:15.511114 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:52:15.511125 | orchestrator | 2026-02-02 00:52:15.511136 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-02 00:52:15.511215 | orchestrator | Monday 02 February 2026 00:47:37 +0000 (0:00:00.776) 0:00:01.777 ******* 2026-02-02 00:52:15.511238 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:52:15.511257 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:52:15.511277 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:52:15.511295 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:52:15.511315 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:52:15.511334 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:52:15.511353 | orchestrator | 2026-02-02 00:52:15.511373 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-02 00:52:15.511391 | orchestrator | Monday 02 February 2026 00:47:38 +0000 (0:00:00.702) 0:00:02.479 
******* 2026-02-02 00:52:15.511410 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:52:15.511428 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:52:15.511446 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:52:15.511457 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:52:15.511468 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:52:15.511479 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:52:15.511491 | orchestrator | 2026-02-02 00:52:15.511502 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-02 00:52:15.511513 | orchestrator | Monday 02 February 2026 00:47:40 +0000 (0:00:02.125) 0:00:04.605 ******* 2026-02-02 00:52:15.511524 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:52:15.511535 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:52:15.511545 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:52:15.511556 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:52:15.511566 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:52:15.511577 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:52:15.511588 | orchestrator | 2026-02-02 00:52:15.511599 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-02-02 00:52:15.511609 | orchestrator | Monday 02 February 2026 00:47:41 +0000 (0:00:01.257) 0:00:05.863 ******* 2026-02-02 00:52:15.511659 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:52:15.511671 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:52:15.511682 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:52:15.511693 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:52:15.511703 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:52:15.511714 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:52:15.511725 | orchestrator | 2026-02-02 00:52:15.511736 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] 
******************* 2026-02-02 00:52:15.511747 | orchestrator | Monday 02 February 2026 00:47:42 +0000 (0:00:01.110) 0:00:06.974 ******* 2026-02-02 00:52:15.511758 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:52:15.511769 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:52:15.511779 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:52:15.511790 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:52:15.511801 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:52:15.511811 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:52:15.511822 | orchestrator | 2026-02-02 00:52:15.511838 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-02 00:52:15.511852 | orchestrator | Monday 02 February 2026 00:47:43 +0000 (0:00:00.773) 0:00:07.747 ******* 2026-02-02 00:52:15.511863 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:52:15.511874 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:52:15.511885 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:52:15.511895 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:52:15.511906 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:52:15.511917 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:52:15.511928 | orchestrator | 2026-02-02 00:52:15.511939 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-02 00:52:15.511950 | orchestrator | Monday 02 February 2026 00:47:43 +0000 (0:00:00.640) 0:00:08.387 ******* 2026-02-02 00:52:15.511961 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-02 00:52:15.511984 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-02 00:52:15.511995 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:52:15.512006 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-02 
00:52:15.512016 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 00:52:15.512027 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:52:15.512044 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 00:52:15.512063 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 00:52:15.512082 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:52:15.512101 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 00:52:15.512140 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 00:52:15.512158 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.512172 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 00:52:15.512183 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 00:52:15.512194 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.512212 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 00:52:15.512224 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 00:52:15.512235 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.512246 | orchestrator |
2026-02-02 00:52:15.512257 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-02 00:52:15.512268 | orchestrator | Monday 02 February 2026 00:47:44 +0000 (0:00:00.924) 0:00:09.312 *******
2026-02-02 00:52:15.512279 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:52:15.512290 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:52:15.512301 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:52:15.512312 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.512323 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.512342 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.512353 | orchestrator |
2026-02-02 00:52:15.512364 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-02 00:52:15.512376 | orchestrator | Monday 02 February 2026 00:47:46 +0000 (0:00:01.545) 0:00:10.858 *******
2026-02-02 00:52:15.512387 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:52:15.512398 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:52:15.512409 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:52:15.512420 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.512431 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.512441 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.512454 | orchestrator |
2026-02-02 00:52:15.512473 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-02 00:52:15.512492 | orchestrator | Monday 02 February 2026 00:47:47 +0000 (0:00:01.230) 0:00:12.088 *******
2026-02-02 00:52:15.512510 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.512529 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:52:15.512548 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:52:15.512565 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:52:15.512585 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:52:15.512604 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:52:15.512651 | orchestrator |
2026-02-02 00:52:15.512670 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-02 00:52:15.512690 | orchestrator | Monday 02 February 2026 00:47:53 +0000 (0:00:05.601) 0:00:17.690 *******
2026-02-02 00:52:15.512709 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:52:15.512728 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:52:15.512740 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.512770 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:52:15.512790 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.512809 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.512827 | orchestrator |
2026-02-02 00:52:15.512846 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-02 00:52:15.512860 | orchestrator | Monday 02 February 2026 00:47:55 +0000 (0:00:01.958) 0:00:19.648 *******
2026-02-02 00:52:15.512875 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:52:15.512894 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:52:15.512913 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:52:15.512932 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.512952 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.512970 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.512989 | orchestrator |
2026-02-02 00:52:15.513007 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-02 00:52:15.513028 | orchestrator | Monday 02 February 2026 00:47:58 +0000 (0:00:03.065) 0:00:22.714 *******
2026-02-02 00:52:15.513045 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:52:15.513062 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:52:15.513081 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:52:15.513096 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.513111 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.513126 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.513144 | orchestrator |
2026-02-02 00:52:15.513162 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-02 00:52:15.513180 | orchestrator | Monday 02 February 2026 00:47:59 +0000 (0:00:01.266) 0:00:23.980 *******
2026-02-02 00:52:15.513199 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-02 00:52:15.513219 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-02 00:52:15.513237 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:52:15.513255 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-02 00:52:15.513274 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-02 00:52:15.513291 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:52:15.513310 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-02 00:52:15.513329 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-02 00:52:15.513342 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:52:15.513353 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-02 00:52:15.513364 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-02 00:52:15.513375 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.513386 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-02 00:52:15.513397 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-02 00:52:15.513408 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.513419 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-02 00:52:15.513430 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-02 00:52:15.513441 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.513452 | orchestrator |
2026-02-02 00:52:15.513464 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-02 00:52:15.513488 | orchestrator | Monday 02 February 2026 00:48:00 +0000 (0:00:01.335) 0:00:25.315 *******
2026-02-02 00:52:15.513500 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:52:15.513513 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:52:15.513531 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:52:15.513551 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.513569 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.513588 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.513607 | orchestrator |
2026-02-02 00:52:15.513687 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-02 00:52:15.513714 | orchestrator | Monday 02 February 2026 00:48:02 +0000 (0:00:01.229) 0:00:26.544 *******
2026-02-02 00:52:15.513725 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:52:15.513735 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:52:15.513745 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:52:15.513754 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.513764 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.513773 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.513783 | orchestrator |
2026-02-02 00:52:15.513793 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-02 00:52:15.513803 | orchestrator |
2026-02-02 00:52:15.513813 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-02 00:52:15.513823 | orchestrator | Monday 02 February 2026 00:48:03 +0000 (0:00:01.821) 0:00:28.366 *******
2026-02-02 00:52:15.513833 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.513842 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.513852 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.513862 | orchestrator |
2026-02-02 00:52:15.513872 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-02 00:52:15.513882 | orchestrator | Monday 02 February 2026 00:48:05 +0000 (0:00:01.821) 0:00:30.188 *******
2026-02-02 00:52:15.513892 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.513919 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.513930 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.513939 | orchestrator |
2026-02-02 00:52:15.513949 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-02 00:52:15.514861 | orchestrator | Monday 02 February 2026 00:48:07 +0000 (0:00:01.575) 0:00:31.763 *******
2026-02-02 00:52:15.514908 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.514920 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.514931 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.514942 | orchestrator |
2026-02-02 00:52:15.514954 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-02 00:52:15.514966 | orchestrator | Monday 02 February 2026 00:48:08 +0000 (0:00:01.003) 0:00:32.767 *******
2026-02-02 00:52:15.514977 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.514988 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.515009 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.515025 | orchestrator |
2026-02-02 00:52:15.515042 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-02 00:52:15.515058 | orchestrator | Monday 02 February 2026 00:48:09 +0000 (0:00:00.765) 0:00:33.533 *******
2026-02-02 00:52:15.515074 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.515089 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.515106 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.515121 | orchestrator |
2026-02-02 00:52:15.515137 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-02 00:52:15.515155 | orchestrator | Monday 02 February 2026 00:48:09 +0000 (0:00:00.709) 0:00:34.242 *******
2026-02-02 00:52:15.515172 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.515188 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:52:15.515201 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:52:15.515212 | orchestrator |
2026-02-02 00:52:15.515222 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-02 00:52:15.515232 | orchestrator | Monday 02 February 2026 00:48:11 +0000 (0:00:01.554) 0:00:35.797 *******
2026-02-02 00:52:15.515242 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:52:15.515251 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.515261 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:52:15.515271 | orchestrator |
2026-02-02 00:52:15.515281 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-02 00:52:15.515290 | orchestrator | Monday 02 February 2026 00:48:13 +0000 (0:00:01.652) 0:00:37.449 *******
2026-02-02 00:52:15.515300 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:52:15.515324 | orchestrator |
2026-02-02 00:52:15.515334 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-02 00:52:15.515344 | orchestrator | Monday 02 February 2026 00:48:13 +0000 (0:00:00.545) 0:00:37.995 *******
2026-02-02 00:52:15.515354 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.515363 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.515373 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.515383 | orchestrator |
2026-02-02 00:52:15.515392 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-02 00:52:15.515402 | orchestrator | Monday 02 February 2026 00:48:17 +0000 (0:00:03.442) 0:00:41.438 *******
2026-02-02 00:52:15.515412 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.515422 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.515431 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.515441 | orchestrator |
2026-02-02 00:52:15.515450 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-02 00:52:15.515460 | orchestrator | Monday 02 February 2026 00:48:17 +0000 (0:00:00.781) 0:00:42.220 *******
2026-02-02 00:52:15.515470 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.515480 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.515489 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.515499 | orchestrator |
2026-02-02 00:52:15.515509 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-02 00:52:15.515519 | orchestrator | Monday 02 February 2026 00:48:18 +0000 (0:00:01.036) 0:00:43.256 *******
2026-02-02 00:52:15.515528 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.515538 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.515548 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.515558 | orchestrator |
2026-02-02 00:52:15.515567 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-02 00:52:15.515590 | orchestrator | Monday 02 February 2026 00:48:20 +0000 (0:00:01.534) 0:00:44.790 *******
2026-02-02 00:52:15.515600 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.515610 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.515683 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.515694 | orchestrator |
2026-02-02 00:52:15.515704 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-02 00:52:15.515714 | orchestrator | Monday 02 February 2026 00:48:21 +0000 (0:00:00.699) 0:00:45.490 *******
2026-02-02 00:52:15.515724 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.515734 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.515743 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.515753 | orchestrator |
2026-02-02 00:52:15.515763 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-02 00:52:15.515773 | orchestrator | Monday 02 February 2026 00:48:21 +0000 (0:00:00.608) 0:00:46.098 *******
2026-02-02 00:52:15.515782 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.515792 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:52:15.515802 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:52:15.515811 | orchestrator |
2026-02-02 00:52:15.515821 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-02 00:52:15.515831 | orchestrator | Monday 02 February 2026 00:48:23 +0000 (0:00:01.353) 0:00:47.452 *******
2026-02-02 00:52:15.515841 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.515850 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.515860 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.515870 | orchestrator |
2026-02-02 00:52:15.515880 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-02 00:52:15.515890 | orchestrator | Monday 02 February 2026 00:48:25 +0000 (0:00:02.541) 0:00:49.994 *******
2026-02-02 00:52:15.515899 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.515909 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.515919 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.515929 | orchestrator |
2026-02-02 00:52:15.515947 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-02 00:52:15.515957 | orchestrator | Monday 02 February 2026 00:48:26 +0000 (0:00:00.555) 0:00:50.549 *******
2026-02-02 00:52:15.515967 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-02 00:52:15.515979 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-02 00:52:15.515995 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-02 00:52:15.516005 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-02 00:52:15.516015 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-02 00:52:15.516024 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-02 00:52:15.516034 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-02 00:52:15.516044 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-02 00:52:15.516054 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-02 00:52:15.516064 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-02 00:52:15.516073 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-02 00:52:15.516083 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-02 00:52:15.516093 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.516103 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.516112 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.516122 | orchestrator |
2026-02-02 00:52:15.516132 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-02 00:52:15.516142 | orchestrator | Monday 02 February 2026 00:49:09 +0000 (0:00:43.281) 0:01:33.831 *******
2026-02-02 00:52:15.516152 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.516162 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.516172 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.516181 | orchestrator |
2026-02-02 00:52:15.516189 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-02 00:52:15.516197 | orchestrator | Monday 02 February 2026 00:49:09 +0000 (0:00:00.245) 0:01:34.077 *******
2026-02-02 00:52:15.516205 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.516213 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:52:15.516221 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:52:15.516229 | orchestrator |
2026-02-02 00:52:15.516237 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-02 00:52:15.516245 | orchestrator | Monday 02 February 2026 00:49:10 +0000 (0:00:01.092) 0:01:35.169 *******
2026-02-02 00:52:15.516253 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.516261 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:52:15.516269 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:52:15.516277 | orchestrator |
2026-02-02 00:52:15.516291 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-02 00:52:15.516300 | orchestrator | Monday 02 February 2026 00:49:12 +0000 (0:00:01.484) 0:01:36.653 *******
2026-02-02 00:52:15.516314 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:52:15.516322 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:52:15.516330 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.516338 | orchestrator |
2026-02-02 00:52:15.516346 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-02 00:52:15.516355 | orchestrator | Monday 02 February 2026 00:49:36 +0000 (0:00:24.337) 0:02:00.991 *******
2026-02-02 00:52:15.516363 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.516371 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.516379 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.516387 | orchestrator |
2026-02-02 00:52:15.516395 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-02 00:52:15.516403 | orchestrator | Monday 02 February 2026 00:49:37 +0000 (0:00:00.920) 0:02:01.912 *******
2026-02-02 00:52:15.516411 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.516419 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.516427 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.516435 | orchestrator |
2026-02-02 00:52:15.516443 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-02 00:52:15.516451 | orchestrator | Monday 02 February 2026 00:49:38 +0000 (0:00:01.014) 0:02:02.926 *******
2026-02-02 00:52:15.516459 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.516467 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:52:15.516475 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:52:15.516483 | orchestrator |
2026-02-02 00:52:15.516491 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-02 00:52:15.516499 | orchestrator | Monday 02 February 2026 00:49:39 +0000 (0:00:00.784) 0:02:03.711 *******
2026-02-02 00:52:15.516508 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.516516 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.516524 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.516532 | orchestrator |
2026-02-02 00:52:15.516540 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-02 00:52:15.516548 | orchestrator | Monday 02 February 2026 00:49:40 +0000 (0:00:01.076) 0:02:04.788 *******
2026-02-02 00:52:15.516556 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.516564 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.516572 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.516580 | orchestrator |
2026-02-02 00:52:15.516588 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-02 00:52:15.516596 | orchestrator | Monday 02 February 2026 00:49:40 +0000 (0:00:00.379) 0:02:05.167 *******
2026-02-02 00:52:15.516604 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.516637 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:52:15.516647 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:52:15.516655 | orchestrator |
2026-02-02 00:52:15.516663 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-02 00:52:15.516671 | orchestrator | Monday 02 February 2026 00:49:41 +0000 (0:00:00.666) 0:02:05.834 *******
2026-02-02 00:52:15.516680 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.516688 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:52:15.516696 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:52:15.516704 | orchestrator |
2026-02-02 00:52:15.516712 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-02 00:52:15.516720 | orchestrator | Monday 02 February 2026 00:49:42 +0000 (0:00:00.621) 0:02:06.455 *******
2026-02-02 00:52:15.516728 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.516736 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:52:15.516744 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:52:15.516752 | orchestrator |
2026-02-02 00:52:15.516760 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-02 00:52:15.516768 | orchestrator | Monday 02 February 2026 00:49:43 +0000 (0:00:01.101) 0:02:07.557 *******
2026-02-02 00:52:15.516776 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:52:15.516784 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:52:15.516798 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:52:15.516806 | orchestrator |
2026-02-02 00:52:15.516814 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-02 00:52:15.516822 | orchestrator | Monday 02 February 2026 00:49:43 +0000 (0:00:00.760) 0:02:08.317 *******
2026-02-02 00:52:15.516831 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.516838 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.516846 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.516854 | orchestrator |
2026-02-02 00:52:15.516862 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-02 00:52:15.516870 | orchestrator | Monday 02 February 2026 00:49:44 +0000 (0:00:00.331) 0:02:08.649 *******
2026-02-02 00:52:15.516878 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.516887 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.516895 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.516902 | orchestrator |
2026-02-02 00:52:15.516910 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-02 00:52:15.516919 | orchestrator | Monday 02 February 2026 00:49:44 +0000 (0:00:00.285) 0:02:08.934 *******
2026-02-02 00:52:15.516927 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.516935 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.516943 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.516951 | orchestrator |
2026-02-02 00:52:15.516959 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-02 00:52:15.516967 | orchestrator | Monday 02 February 2026 00:49:45 +0000 (0:00:00.975) 0:02:09.909 *******
2026-02-02 00:52:15.516975 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.516983 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.516991 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.516998 | orchestrator |
2026-02-02 00:52:15.517007 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-02 00:52:15.517015 | orchestrator | Monday 02 February 2026 00:49:46 +0000 (0:00:00.629) 0:02:10.539 *******
2026-02-02 00:52:15.517023 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-02 00:52:15.517036 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-02 00:52:15.517045 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-02 00:52:15.517053 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-02 00:52:15.517061 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-02 00:52:15.517069 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-02 00:52:15.517077 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-02 00:52:15.517085 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-02 00:52:15.517094 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-02 00:52:15.517102 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-02 00:52:15.517110 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-02 00:52:15.517118 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-02 00:52:15.517126 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-02 00:52:15.517134 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-02 00:52:15.517142 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-02 00:52:15.517155 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-02 00:52:15.517163 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-02 00:52:15.517171 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-02 00:52:15.517179 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-02 00:52:15.517191 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-02 00:52:15.517199 | orchestrator |
2026-02-02 00:52:15.517207 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-02 00:52:15.517215 | orchestrator |
2026-02-02 00:52:15.517223 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-02 00:52:15.517231 | orchestrator | Monday 02 February 2026 00:49:48 +0000 (0:00:02.733) 0:02:13.272 *******
2026-02-02 00:52:15.517239 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:52:15.517247 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:52:15.517255 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:52:15.517263 | orchestrator |
2026-02-02 00:52:15.517271 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-02 00:52:15.517279 | orchestrator | Monday 02 February 2026 00:49:49 +0000 (0:00:00.446) 0:02:13.718 *******
2026-02-02 00:52:15.517287 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:52:15.517295 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:52:15.517303 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:52:15.517311 | orchestrator |
2026-02-02 00:52:15.517319 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-02 00:52:15.517327 | orchestrator | Monday 02 February 2026 00:49:49 +0000 (0:00:00.579) 0:02:14.298 *******
2026-02-02 00:52:15.517335 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:52:15.517343 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:52:15.517352 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:52:15.517359 | orchestrator |
2026-02-02 00:52:15.517368 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-02 00:52:15.517413 | orchestrator | Monday 02 February 2026 00:49:50 +0000 (0:00:00.294) 0:02:14.592 *******
2026-02-02 00:52:15.517423 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:52:15.517431 | orchestrator |
2026-02-02 00:52:15.517439 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-02 00:52:15.517447 | orchestrator | Monday 02 February 2026 00:49:50 +0000 (0:00:00.492) 0:02:15.085 *******
2026-02-02 00:52:15.517458 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:52:15.517473 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:52:15.517487 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:52:15.517500 | orchestrator |
2026-02-02 00:52:15.517514 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-02 00:52:15.517527 | orchestrator | Monday 02 February 2026 00:49:50 +0000 (0:00:00.249) 0:02:15.335 *******
2026-02-02 00:52:15.517541 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:52:15.517557 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:52:15.517571 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:52:15.517584 | orchestrator |
2026-02-02 00:52:15.517593 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-02 00:52:15.517601 | orchestrator | Monday 02 February 2026 00:49:51 +0000 (0:00:00.246) 0:02:15.582 *******
2026-02-02 00:52:15.517609 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:52:15.517642 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:52:15.517650 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:52:15.517658 | orchestrator |
2026-02-02 00:52:15.517666 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-02 00:52:15.517674 | orchestrator | Monday 02 February 2026 00:49:51 +0000 (0:00:00.297) 0:02:15.879 *******
2026-02-02 00:52:15.517683 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:52:15.517697 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:52:15.517706 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:52:15.517714 | orchestrator |
2026-02-02 00:52:15.517731 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-02 00:52:15.517745 | orchestrator | Monday 02 February 2026 00:49:52 +0000 (0:00:00.798) 0:02:16.678 *******
2026-02-02 00:52:15.517759 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:52:15.517773 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:52:15.517787 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:52:15.517802 | orchestrator |
2026-02-02 00:52:15.517814 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-02 00:52:15.517827 | orchestrator | Monday 02 February 2026 00:49:53 +0000 (0:00:01.132) 0:02:17.810 *******
2026-02-02 00:52:15.517840 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:52:15.517853 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:52:15.517866 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:52:15.517877 | orchestrator |
2026-02-02 00:52:15.517890 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-02 00:52:15.517903 | orchestrator | Monday 02 February 2026 00:49:54 +0000 (0:00:01.201) 0:02:19.012 *******
2026-02-02 00:52:15.517915 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:52:15.517927 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:52:15.517939 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:52:15.517953 | orchestrator |
2026-02-02 00:52:15.517966 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-02 00:52:15.517978 | orchestrator |
2026-02-02 00:52:15.517991 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-02 00:52:15.518004 | orchestrator | Monday 02 February 2026 00:50:04 +0000 (0:00:10.038) 0:02:29.050 *******
2026-02-02 00:52:15.518052 | orchestrator | ok: [testbed-manager]
2026-02-02 00:52:15.518070 | orchestrator |
2026-02-02 00:52:15.518083 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-02 00:52:15.518097 | orchestrator | Monday 02 February 2026 00:50:05 +0000 (0:00:00.878) 0:02:29.929 *******
2026-02-02 00:52:15.518109 | orchestrator | changed: [testbed-manager]
2026-02-02 00:52:15.518123 | orchestrator |
2026-02-02 00:52:15.518136 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-02 00:52:15.518149 | orchestrator | Monday 02 February 2026 00:50:05 +0000 (0:00:00.446) 0:02:30.376 *******
2026-02-02 00:52:15.518162 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-02 00:52:15.518175 | orchestrator |
2026-02-02 00:52:15.518187 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-02 00:52:15.518201 | orchestrator | Monday 02 February 2026 00:50:06 +0000 (0:00:00.534) 0:02:30.910 *******
2026-02-02 00:52:15.518215 | orchestrator | changed: [testbed-manager]
2026-02-02 00:52:15.518229 | orchestrator |
2026-02-02 00:52:15.518250 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-02 00:52:15.518264 | orchestrator | Monday 02 February 2026 00:50:07 +0000 (0:00:00.825) 0:02:31.736 *******
2026-02-02 00:52:15.518277 | orchestrator | changed: [testbed-manager]
2026-02-02 00:52:15.518291 | orchestrator |
2026-02-02 00:52:15.518306 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-02 00:52:15.518318 | orchestrator | Monday 02 February 2026 00:50:07 +0000 (0:00:00.522) 0:02:32.259 *******
2026-02-02 00:52:15.518333 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-02 00:52:15.518346 | orchestrator |
2026-02-02 00:52:15.518360 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-02 00:52:15.518372 | orchestrator | Monday 02 February 2026 00:50:09 +0000 (0:00:01.597) 0:02:33.856 *******
2026-02-02 00:52:15.518385 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-02 00:52:15.518396 | orchestrator |
2026-02-02 00:52:15.518408 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-02 00:52:15.518419 | orchestrator | Monday 02 February 2026 00:50:10 +0000 (0:00:01.045) 0:02:34.902 ******* 2026-02-02 00:52:15.518444 | orchestrator | changed: [testbed-manager] 2026-02-02 00:52:15.518458 | orchestrator | 2026-02-02 00:52:15.518471 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-02 00:52:15.518483 | orchestrator | Monday 02 February 2026 00:50:11 +0000 (0:00:00.713) 0:02:35.616 ******* 2026-02-02 00:52:15.518497 | orchestrator | changed: [testbed-manager] 2026-02-02 00:52:15.518510 | orchestrator | 2026-02-02 00:52:15.518523 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-02 00:52:15.518536 | orchestrator | 2026-02-02 00:52:15.518548 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-02 00:52:15.518560 | orchestrator | Monday 02 February 2026 00:50:11 +0000 (0:00:00.657) 0:02:36.273 ******* 2026-02-02 00:52:15.518574 | orchestrator | ok: [testbed-manager] 2026-02-02 00:52:15.518587 | orchestrator | 2026-02-02 00:52:15.518602 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-02 00:52:15.518682 | orchestrator | Monday 02 February 2026 00:50:12 +0000 (0:00:00.154) 0:02:36.427 ******* 2026-02-02 00:52:15.518700 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-02 00:52:15.518713 | orchestrator | 2026-02-02 00:52:15.518726 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-02 00:52:15.518740 | orchestrator | Monday 02 February 2026 00:50:12 +0000 (0:00:00.247) 0:02:36.674 ******* 2026-02-02 00:52:15.518751 | orchestrator | ok: [testbed-manager] 2026-02-02 00:52:15.518760 | orchestrator | 2026-02-02 00:52:15.518768 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-02-02 00:52:15.518776 | orchestrator | Monday 02 February 2026 00:50:13 +0000 (0:00:00.971) 0:02:37.646 ******* 2026-02-02 00:52:15.518784 | orchestrator | ok: [testbed-manager] 2026-02-02 00:52:15.518792 | orchestrator | 2026-02-02 00:52:15.518800 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-02 00:52:15.518808 | orchestrator | Monday 02 February 2026 00:50:15 +0000 (0:00:01.969) 0:02:39.616 ******* 2026-02-02 00:52:15.518816 | orchestrator | changed: [testbed-manager] 2026-02-02 00:52:15.518824 | orchestrator | 2026-02-02 00:52:15.518832 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-02 00:52:15.518840 | orchestrator | Monday 02 February 2026 00:50:16 +0000 (0:00:00.929) 0:02:40.546 ******* 2026-02-02 00:52:15.518848 | orchestrator | ok: [testbed-manager] 2026-02-02 00:52:15.518856 | orchestrator | 2026-02-02 00:52:15.518877 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-02 00:52:15.518885 | orchestrator | Monday 02 February 2026 00:50:16 +0000 (0:00:00.364) 0:02:40.910 ******* 2026-02-02 00:52:15.518893 | orchestrator | changed: [testbed-manager] 2026-02-02 00:52:15.518901 | orchestrator | 2026-02-02 00:52:15.518950 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-02 00:52:15.518958 | orchestrator | Monday 02 February 2026 00:50:25 +0000 (0:00:08.600) 0:02:49.510 ******* 2026-02-02 00:52:15.518966 | orchestrator | changed: [testbed-manager] 2026-02-02 00:52:15.518974 | orchestrator | 2026-02-02 00:52:15.518982 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-02 00:52:15.518990 | orchestrator | Monday 02 February 2026 00:50:41 +0000 (0:00:16.099) 0:03:05.610 ******* 2026-02-02 00:52:15.518998 | orchestrator | ok: [testbed-manager] 2026-02-02 
00:52:15.519006 | orchestrator | 2026-02-02 00:52:15.519015 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-02-02 00:52:15.519023 | orchestrator | 2026-02-02 00:52:15.519031 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-02 00:52:15.519039 | orchestrator | Monday 02 February 2026 00:50:41 +0000 (0:00:00.627) 0:03:06.237 ******* 2026-02-02 00:52:15.519047 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:52:15.519055 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:52:15.519063 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:52:15.519071 | orchestrator | 2026-02-02 00:52:15.519089 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-02 00:52:15.519097 | orchestrator | Monday 02 February 2026 00:50:42 +0000 (0:00:00.352) 0:03:06.590 ******* 2026-02-02 00:52:15.519105 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:52:15.519113 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:52:15.519121 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:52:15.519129 | orchestrator | 2026-02-02 00:52:15.519137 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-02 00:52:15.519145 | orchestrator | Monday 02 February 2026 00:50:42 +0000 (0:00:00.351) 0:03:06.941 ******* 2026-02-02 00:52:15.519153 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:52:15.519161 | orchestrator | 2026-02-02 00:52:15.519170 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-02 00:52:15.519176 | orchestrator | Monday 02 February 2026 00:50:43 +0000 (0:00:00.662) 0:03:07.604 ******* 2026-02-02 00:52:15.519183 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-02 00:52:15.519190 | 
orchestrator | 2026-02-02 00:52:15.519202 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-02 00:52:15.519209 | orchestrator | Monday 02 February 2026 00:50:44 +0000 (0:00:00.991) 0:03:08.595 ******* 2026-02-02 00:52:15.519216 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 00:52:15.519223 | orchestrator | 2026-02-02 00:52:15.519230 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-02 00:52:15.519237 | orchestrator | Monday 02 February 2026 00:50:45 +0000 (0:00:00.864) 0:03:09.460 ******* 2026-02-02 00:52:15.519243 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:52:15.519250 | orchestrator | 2026-02-02 00:52:15.519257 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-02 00:52:15.519264 | orchestrator | Monday 02 February 2026 00:50:45 +0000 (0:00:00.117) 0:03:09.577 ******* 2026-02-02 00:52:15.519271 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 00:52:15.519277 | orchestrator | 2026-02-02 00:52:15.519284 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-02 00:52:15.519291 | orchestrator | Monday 02 February 2026 00:50:46 +0000 (0:00:00.998) 0:03:10.576 ******* 2026-02-02 00:52:15.519298 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:52:15.519304 | orchestrator | 2026-02-02 00:52:15.519311 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-02 00:52:15.519318 | orchestrator | Monday 02 February 2026 00:50:46 +0000 (0:00:00.141) 0:03:10.717 ******* 2026-02-02 00:52:15.519325 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:52:15.519331 | orchestrator | 2026-02-02 00:52:15.519338 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-02 00:52:15.519345 | orchestrator | Monday 02 
February 2026 00:50:46 +0000 (0:00:00.110) 0:03:10.828 ******* 2026-02-02 00:52:15.519352 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:52:15.519359 | orchestrator | 2026-02-02 00:52:15.519365 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-02 00:52:15.519372 | orchestrator | Monday 02 February 2026 00:50:46 +0000 (0:00:00.114) 0:03:10.942 ******* 2026-02-02 00:52:15.519379 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:52:15.519386 | orchestrator | 2026-02-02 00:52:15.519393 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-02 00:52:15.519399 | orchestrator | Monday 02 February 2026 00:50:46 +0000 (0:00:00.115) 0:03:11.057 ******* 2026-02-02 00:52:15.519406 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-02 00:52:15.519413 | orchestrator | 2026-02-02 00:52:15.519420 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-02 00:52:15.519427 | orchestrator | Monday 02 February 2026 00:50:52 +0000 (0:00:05.873) 0:03:16.931 ******* 2026-02-02 00:52:15.519433 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-02 00:52:15.519440 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-02-02 00:52:15.519458 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-02 00:52:15.519465 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-02 00:52:15.519472 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-02 00:52:15.519479 | orchestrator |
2026-02-02 00:52:15.519485 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-02 00:52:15.519492 | orchestrator | Monday 02 February 2026 00:51:40 +0000 (0:00:48.381) 0:04:05.312 *******
2026-02-02 00:52:15.519504 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-02 00:52:15.519511 | orchestrator |
2026-02-02 00:52:15.519518 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-02 00:52:15.519525 | orchestrator | Monday 02 February 2026 00:51:42 +0000 (0:00:01.286) 0:04:06.599 *******
2026-02-02 00:52:15.519532 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-02 00:52:15.519539 | orchestrator |
2026-02-02 00:52:15.519546 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-02 00:52:15.519552 | orchestrator | Monday 02 February 2026 00:51:43 +0000 (0:00:01.612) 0:04:08.211 *******
2026-02-02 00:52:15.519559 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-02 00:52:15.519566 | orchestrator |
2026-02-02 00:52:15.519573 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-02 00:52:15.519579 | orchestrator | Monday 02 February 2026 00:51:44 +0000 (0:00:01.174) 0:04:09.385 *******
2026-02-02 00:52:15.519586 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.519593 | orchestrator |
2026-02-02 00:52:15.519600 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-02 00:52:15.519606 | orchestrator | Monday 02 February 2026 00:51:45 +0000 (0:00:00.160) 0:04:09.545 *******
2026-02-02 00:52:15.519633 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-02 00:52:15.519644 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-02 00:52:15.519656 | orchestrator |
2026-02-02 00:52:15.519667 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-02 00:52:15.519678 | orchestrator | Monday 02 February 2026 00:51:47 +0000 (0:00:02.212) 0:04:11.758 *******
2026-02-02 00:52:15.519690 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.519697 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.519704 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.519710 | orchestrator |
2026-02-02 00:52:15.519717 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-02 00:52:15.519724 | orchestrator | Monday 02 February 2026 00:51:47 +0000 (0:00:00.420) 0:04:12.178 *******
2026-02-02 00:52:15.519730 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.519737 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.519744 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.519750 | orchestrator |
2026-02-02 00:52:15.519757 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-02 00:52:15.519764 | orchestrator |
2026-02-02 00:52:15.519771 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-02 00:52:15.519781 | orchestrator | Monday 02 February 2026 00:51:48 +0000 (0:00:01.177) 0:04:13.356 *******
2026-02-02 00:52:15.519788 | orchestrator | ok: [testbed-manager]
2026-02-02 00:52:15.519795 | orchestrator |
2026-02-02 00:52:15.519802 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-02 00:52:15.519808 | orchestrator | Monday 02 February 2026 00:51:49 +0000 (0:00:00.192) 0:04:13.548 *******
2026-02-02 00:52:15.519815 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-02 00:52:15.519822 | orchestrator |
2026-02-02 00:52:15.519829 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-02 00:52:15.519841 | orchestrator | Monday 02 February 2026 00:51:49 +0000 (0:00:00.231) 0:04:13.780 *******
2026-02-02 00:52:15.519848 | orchestrator | changed: [testbed-manager]
2026-02-02 00:52:15.519854 | orchestrator |
2026-02-02 00:52:15.519861 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-02 00:52:15.519868 | orchestrator |
2026-02-02 00:52:15.519874 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-02 00:52:15.519881 | orchestrator | Monday 02 February 2026 00:51:54 +0000 (0:00:05.551) 0:04:19.332 *******
2026-02-02 00:52:15.519888 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:52:15.519895 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:52:15.519901 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:52:15.519908 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:52:15.519915 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:52:15.519922 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:52:15.519928 | orchestrator |
2026-02-02 00:52:15.519935 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-02 00:52:15.519942 | orchestrator | Monday 02 February 2026 00:51:55 +0000 (0:00:00.894) 0:04:20.226 *******
2026-02-02 00:52:15.519948 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-02 00:52:15.519955 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-02 00:52:15.519962 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-02 00:52:15.519969 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-02 00:52:15.519975 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-02 00:52:15.519982 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-02 00:52:15.519988 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-02 00:52:15.519995 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-02 00:52:15.520002 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-02 00:52:15.520008 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-02 00:52:15.520015 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-02 00:52:15.520022 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-02 00:52:15.520033 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-02 00:52:15.520040 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-02 00:52:15.520047 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-02 00:52:15.520054 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-02 00:52:15.520061 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-02 00:52:15.520067 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-02 00:52:15.520074 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-02 00:52:15.520081 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-02 00:52:15.520087 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-02 00:52:15.520094 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-02 00:52:15.520101 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-02 00:52:15.520107 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-02 00:52:15.520114 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-02 00:52:15.520141 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-02 00:52:15.520148 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-02 00:52:15.520155 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-02 00:52:15.520162 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-02 00:52:15.520168 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-02 00:52:15.520175 | orchestrator |
2026-02-02 00:52:15.520182 | orchestrator | TASK [Manage annotations] ******************************************************
2026-02-02 00:52:15.520189 | orchestrator | Monday 02 February 2026 00:52:10 +0000 (0:00:15.122) 0:04:35.348 *******
2026-02-02 00:52:15.520195 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:52:15.520206 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:52:15.520212 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:52:15.520219 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.520226 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.520233 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.520239 | orchestrator |
2026-02-02 00:52:15.520246 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-02 00:52:15.520253 | orchestrator | Monday 02 February 2026 00:52:12 +0000 (0:00:01.042) 0:04:36.390 *******
2026-02-02 00:52:15.520269 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:52:15.520276 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:52:15.520283 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:52:15.520290 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:52:15.520296 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:52:15.520303 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:52:15.520309 | orchestrator |
2026-02-02 00:52:15.520316 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:52:15.520323 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:52:15.520332 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-02 00:52:15.520339 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-02 00:52:15.520346 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-02 00:52:15.520353 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-02 00:52:15.520360 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-02 00:52:15.520367 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-02 00:52:15.520373 | orchestrator |
2026-02-02 00:52:15.520380 | orchestrator |
2026-02-02 00:52:15.520387 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:52:15.520394 | orchestrator | Monday 02 February 2026 00:52:12 +0000 (0:00:00.542) 0:04:36.933 *******
2026-02-02 00:52:15.520401 | orchestrator | ===============================================================================
2026-02-02 00:52:15.520407 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 48.38s
2026-02-02 00:52:15.520414 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.28s
2026-02-02 00:52:15.520425 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.34s
2026-02-02 00:52:15.520436 | orchestrator | kubectl : Install required packages ------------------------------------ 16.10s
2026-02-02 00:52:15.520443 | orchestrator | Manage labels ---------------------------------------------------------- 15.12s
2026-02-02 00:52:15.520450 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.04s
2026-02-02 00:52:15.520457 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.60s
2026-02-02 00:52:15.520463 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.87s
2026-02-02 00:52:15.520470 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.60s
2026-02-02 00:52:15.520477 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.55s
2026-02-02 00:52:15.520484 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.44s
2026-02-02 00:52:15.520490 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.07s
2026-02-02 00:52:15.520497 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.73s
2026-02-02 00:52:15.520504 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.54s
2026-02-02 00:52:15.520511 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.21s
2026-02-02 00:52:15.520517 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.13s
2026-02-02 00:52:15.520524 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.97s
2026-02-02 00:52:15.520531 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.96s
2026-02-02 00:52:15.520537 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.82s
2026-02-02 00:52:15.520544 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.82s
2026-02-02 00:52:15.520675 | orchestrator | 2026-02-02 00:52:15 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:52:15.520685 | orchestrator | 2026-02-02 00:52:15 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:52:15.520692 | orchestrator | 2026-02-02 00:52:15 | INFO  | Task 3e43a865-277a-4656-86de-13f5722cd1a2 is in state STARTED
2026-02-02 00:52:15.521903 | orchestrator | 2026-02-02 00:52:15 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED
2026-02-02 00:52:15.521918 | orchestrator | 2026-02-02 00:52:15 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:52:18.560417 | orchestrator | 2026-02-02 00:52:18 | INFO  | Task b8cf4bba-e3db-4370-b1aa-e63ed8a0d093 is in state STARTED
2026-02-02 00:52:18.560517 | orchestrator | 2026-02-02 00:52:18 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:52:18.560997 | orchestrator | 2026-02-02 00:52:18 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:52:18.562668 | orchestrator | 2026-02-02 00:52:18 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:52:18.564096 | orchestrator | 2026-02-02 00:52:18 | INFO  | Task 3e43a865-277a-4656-86de-13f5722cd1a2 is in state STARTED
2026-02-02 00:52:18.565471 | orchestrator | 2026-02-02 00:52:18 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED
2026-02-02 00:52:18.565497 | orchestrator | 2026-02-02 00:52:18 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:52:21.634492 | orchestrator | 2026-02-02 00:52:21 | INFO  | Task b8cf4bba-e3db-4370-b1aa-e63ed8a0d093 is in state STARTED
2026-02-02 00:52:21.634596 | orchestrator | 2026-02-02 00:52:21 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:52:21.636045 | orchestrator | 2026-02-02 00:52:21 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:52:21.636112 | orchestrator | 2026-02-02 00:52:21 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:52:21.636120 | orchestrator | 2026-02-02 00:52:21 | INFO  | Task 3e43a865-277a-4656-86de-13f5722cd1a2 is in state STARTED
2026-02-02 00:52:21.636128 | orchestrator | 2026-02-02 00:52:21 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED
2026-02-02 00:52:21.636142 | orchestrator | 2026-02-02 00:52:21 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:52:24.686583 | orchestrator | 2026-02-02 00:52:24 | INFO  | Task b8cf4bba-e3db-4370-b1aa-e63ed8a0d093 is in state SUCCESS
2026-02-02 00:52:24.693362 | orchestrator | 2026-02-02 00:52:24 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:52:24.711149 | orchestrator | 2026-02-02 00:52:24 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:52:24.715194 | orchestrator | 2026-02-02 00:52:24 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:52:24.718844 | orchestrator | 2026-02-02 00:52:24 | INFO  | Task 3e43a865-277a-4656-86de-13f5722cd1a2 is in state STARTED
2026-02-02 00:52:24.719184 | orchestrator | 2026-02-02 00:52:24 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED
2026-02-02 00:52:24.719320 | orchestrator | 2026-02-02 00:52:24 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:52:27.770434 | orchestrator | 2026-02-02 00:52:27 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:52:27.770553 | orchestrator | 2026-02-02 00:52:27 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:52:27.771401 | orchestrator | 2026-02-02 00:52:27 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:52:27.772411 | orchestrator | 2026-02-02 00:52:27 | INFO  | Task 3e43a865-277a-4656-86de-13f5722cd1a2 is in state STARTED
2026-02-02 00:52:27.774503 | orchestrator | 2026-02-02 00:52:27 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED
2026-02-02 00:52:27.774580 | orchestrator | 2026-02-02 00:52:27 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:52:30.846370 | orchestrator | 2026-02-02 00:52:30 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:52:30.848743 | orchestrator | 2026-02-02 00:52:30 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:52:30.850196 | orchestrator | 2026-02-02 00:52:30 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:52:30.851466 | orchestrator | 2026-02-02 00:52:30 | INFO  | Task 3e43a865-277a-4656-86de-13f5722cd1a2 is in state SUCCESS
2026-02-02 00:52:30.853687 | orchestrator | 2026-02-02 00:52:30 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED
2026-02-02 00:52:30.853720 | orchestrator | 2026-02-02 00:52:30 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:52:33.889344 | orchestrator | 2026-02-02 00:52:33 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:52:33.890774 | orchestrator | 2026-02-02 00:52:33 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:52:33.892666 | orchestrator | 2026-02-02 00:52:33 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:52:33.894967 | orchestrator | 2026-02-02 00:52:33 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED
2026-02-02 00:52:33.895046 | orchestrator | 2026-02-02 00:52:33 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:52:36.933524 | orchestrator | 2026-02-02 00:52:36 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:52:36.937146 | orchestrator | 2026-02-02 00:52:36 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:52:36.937229 | orchestrator | 2026-02-02 00:52:36 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:52:36.937245 | orchestrator | 2026-02-02 00:52:36 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED
2026-02-02 00:52:36.937258 | orchestrator | 2026-02-02 00:52:36 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:52:39.972987 | orchestrator | 2026-02-02 00:52:39 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:52:39.973747 | orchestrator | 2026-02-02 00:52:39 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:52:39.975908 | orchestrator | 2026-02-02 00:52:39 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:52:39.976723 | orchestrator | 2026-02-02 00:52:39 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED
2026-02-02 00:52:39.976771 | orchestrator | 2026-02-02 00:52:39 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:52:43.009636 | orchestrator | 2026-02-02 00:52:43 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:52:43.010310 | orchestrator | 2026-02-02 00:52:43 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:52:43.011495 | orchestrator | 2026-02-02 00:52:43 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:52:43.012323 | orchestrator | 2026-02-02 00:52:43 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED
2026-02-02 00:52:43.012377 | orchestrator | 2026-02-02 00:52:43 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:52:46.046614 | orchestrator | 2026-02-02 00:52:46 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:52:46.048181 | orchestrator | 2026-02-02 00:52:46 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:52:46.048718 | orchestrator | 2026-02-02 00:52:46 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:52:46.049507 | orchestrator | 2026-02-02 00:52:46 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED
2026-02-02 00:52:46.049567 | orchestrator | 2026-02-02 00:52:46 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:52:49.085500 | orchestrator | 2026-02-02 00:52:49 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:52:49.087347 | orchestrator | 2026-02-02 00:52:49 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:52:49.089551 | orchestrator | 2026-02-02 00:52:49 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:52:49.091524 | orchestrator | 2026-02-02 00:52:49 | INFO  | Task
23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:52:49.091537 | orchestrator | 2026-02-02 00:52:49 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:52:52.133033 | orchestrator | 2026-02-02 00:52:52 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:52:52.133105 | orchestrator | 2026-02-02 00:52:52 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:52:52.133320 | orchestrator | 2026-02-02 00:52:52 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:52:52.136488 | orchestrator | 2026-02-02 00:52:52 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:52:52.136525 | orchestrator | 2026-02-02 00:52:52 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:52:55.175221 | orchestrator | 2026-02-02 00:52:55 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:52:55.180306 | orchestrator | 2026-02-02 00:52:55 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:52:55.184866 | orchestrator | 2026-02-02 00:52:55 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:52:55.187521 | orchestrator | 2026-02-02 00:52:55 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:52:55.187604 | orchestrator | 2026-02-02 00:52:55 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:52:58.217200 | orchestrator | 2026-02-02 00:52:58 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:52:58.220018 | orchestrator | 2026-02-02 00:52:58 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:52:58.222534 | orchestrator | 2026-02-02 00:52:58 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:52:58.224775 | orchestrator | 2026-02-02 00:52:58 | INFO  | Task 
23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:52:58.224844 | orchestrator | 2026-02-02 00:52:58 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:01.261513 | orchestrator | 2026-02-02 00:53:01 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:01.261983 | orchestrator | 2026-02-02 00:53:01 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:01.262862 | orchestrator | 2026-02-02 00:53:01 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:01.263693 | orchestrator | 2026-02-02 00:53:01 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:01.263873 | orchestrator | 2026-02-02 00:53:01 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:04.304232 | orchestrator | 2026-02-02 00:53:04 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:04.304693 | orchestrator | 2026-02-02 00:53:04 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:04.305643 | orchestrator | 2026-02-02 00:53:04 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:04.306602 | orchestrator | 2026-02-02 00:53:04 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:04.306638 | orchestrator | 2026-02-02 00:53:04 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:07.351258 | orchestrator | 2026-02-02 00:53:07 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:07.351938 | orchestrator | 2026-02-02 00:53:07 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:07.353595 | orchestrator | 2026-02-02 00:53:07 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:07.354164 | orchestrator | 2026-02-02 00:53:07 | INFO  | Task 
23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:07.354199 | orchestrator | 2026-02-02 00:53:07 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:10.379465 | orchestrator | 2026-02-02 00:53:10 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:10.379714 | orchestrator | 2026-02-02 00:53:10 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:10.380461 | orchestrator | 2026-02-02 00:53:10 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:10.381209 | orchestrator | 2026-02-02 00:53:10 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:10.381251 | orchestrator | 2026-02-02 00:53:10 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:13.423271 | orchestrator | 2026-02-02 00:53:13 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:13.425963 | orchestrator | 2026-02-02 00:53:13 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:13.427747 | orchestrator | 2026-02-02 00:53:13 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:13.431347 | orchestrator | 2026-02-02 00:53:13 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:13.431424 | orchestrator | 2026-02-02 00:53:13 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:16.466940 | orchestrator | 2026-02-02 00:53:16 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:16.467546 | orchestrator | 2026-02-02 00:53:16 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:16.468508 | orchestrator | 2026-02-02 00:53:16 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:16.470870 | orchestrator | 2026-02-02 00:53:16 | INFO  | Task 
23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:16.472320 | orchestrator | 2026-02-02 00:53:16 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:19.510967 | orchestrator | 2026-02-02 00:53:19 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:19.511593 | orchestrator | 2026-02-02 00:53:19 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:19.512605 | orchestrator | 2026-02-02 00:53:19 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:19.513374 | orchestrator | 2026-02-02 00:53:19 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:19.513396 | orchestrator | 2026-02-02 00:53:19 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:22.555030 | orchestrator | 2026-02-02 00:53:22 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:22.556825 | orchestrator | 2026-02-02 00:53:22 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:22.557475 | orchestrator | 2026-02-02 00:53:22 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:22.561655 | orchestrator | 2026-02-02 00:53:22 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:22.561713 | orchestrator | 2026-02-02 00:53:22 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:25.588888 | orchestrator | 2026-02-02 00:53:25 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:25.590166 | orchestrator | 2026-02-02 00:53:25 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:25.590762 | orchestrator | 2026-02-02 00:53:25 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:25.591589 | orchestrator | 2026-02-02 00:53:25 | INFO  | Task 
23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:25.591616 | orchestrator | 2026-02-02 00:53:25 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:28.626973 | orchestrator | 2026-02-02 00:53:28 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:28.627571 | orchestrator | 2026-02-02 00:53:28 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:28.628007 | orchestrator | 2026-02-02 00:53:28 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:28.628774 | orchestrator | 2026-02-02 00:53:28 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:28.628969 | orchestrator | 2026-02-02 00:53:28 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:31.655198 | orchestrator | 2026-02-02 00:53:31 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:31.655675 | orchestrator | 2026-02-02 00:53:31 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:31.656085 | orchestrator | 2026-02-02 00:53:31 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:31.656942 | orchestrator | 2026-02-02 00:53:31 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:31.658326 | orchestrator | 2026-02-02 00:53:31 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:34.685009 | orchestrator | 2026-02-02 00:53:34 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:34.686485 | orchestrator | 2026-02-02 00:53:34 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:34.686595 | orchestrator | 2026-02-02 00:53:34 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:34.686608 | orchestrator | 2026-02-02 00:53:34 | INFO  | Task 
23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:34.686617 | orchestrator | 2026-02-02 00:53:34 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:37.715434 | orchestrator | 2026-02-02 00:53:37 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:37.716008 | orchestrator | 2026-02-02 00:53:37 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:37.716822 | orchestrator | 2026-02-02 00:53:37 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:37.717722 | orchestrator | 2026-02-02 00:53:37 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:37.717753 | orchestrator | 2026-02-02 00:53:37 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:40.745994 | orchestrator | 2026-02-02 00:53:40 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:40.746687 | orchestrator | 2026-02-02 00:53:40 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:40.747124 | orchestrator | 2026-02-02 00:53:40 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:40.747658 | orchestrator | 2026-02-02 00:53:40 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:40.747691 | orchestrator | 2026-02-02 00:53:40 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:43.771350 | orchestrator | 2026-02-02 00:53:43 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:43.773663 | orchestrator | 2026-02-02 00:53:43 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:43.775694 | orchestrator | 2026-02-02 00:53:43 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:43.777583 | orchestrator | 2026-02-02 00:53:43 | INFO  | Task 
23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:43.777854 | orchestrator | 2026-02-02 00:53:43 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:46.822899 | orchestrator | 2026-02-02 00:53:46 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:46.825712 | orchestrator | 2026-02-02 00:53:46 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:46.828627 | orchestrator | 2026-02-02 00:53:46 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:46.830903 | orchestrator | 2026-02-02 00:53:46 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:46.830945 | orchestrator | 2026-02-02 00:53:46 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:49.875728 | orchestrator | 2026-02-02 00:53:49 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:49.878765 | orchestrator | 2026-02-02 00:53:49 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:49.881652 | orchestrator | 2026-02-02 00:53:49 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:49.882724 | orchestrator | 2026-02-02 00:53:49 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:49.882794 | orchestrator | 2026-02-02 00:53:49 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:53:53.099012 | orchestrator | 2026-02-02 00:53:53 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED 2026-02-02 00:53:53.099885 | orchestrator | 2026-02-02 00:53:53 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:53.101266 | orchestrator | 2026-02-02 00:53:53 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:53.102206 | orchestrator | 2026-02-02 00:53:53 | INFO  | Task 
23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED
2026-02-02 00:53:53.102251 | orchestrator | 2026-02-02 00:53:53 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:53:56.162592 | orchestrator | 2026-02-02 00:53:56 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state STARTED
2026-02-02 00:53:56.164213 | orchestrator | 2026-02-02 00:53:56 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED
2026-02-02 00:53:56.166821 | orchestrator | 2026-02-02 00:53:56 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED
2026-02-02 00:53:56.168243 | orchestrator | 2026-02-02 00:53:56 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED
2026-02-02 00:53:56.168534 | orchestrator | 2026-02-02 00:53:56 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:53:59.215313 | orchestrator |
2026-02-02 00:53:59.215386 | orchestrator |
2026-02-02 00:53:59.215398 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-02-02 00:53:59.215408 | orchestrator |
2026-02-02 00:53:59.215418 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-02 00:53:59.215464 | orchestrator | Monday 02 February 2026 00:52:18 +0000 (0:00:00.162) 0:00:00.162 *******
2026-02-02 00:53:59.215494 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-02 00:53:59.215504 | orchestrator |
2026-02-02 00:53:59.215541 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-02 00:53:59.215551 | orchestrator | Monday 02 February 2026 00:52:19 +0000 (0:00:01.083) 0:00:01.245 *******
2026-02-02 00:53:59.215560 | orchestrator | changed: [testbed-manager]
2026-02-02 00:53:59.215590 | orchestrator |
2026-02-02 00:53:59.215601 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-02 00:53:59.215631 | orchestrator | Monday 02 February 2026 00:52:21 +0000 (0:00:01.556) 0:00:02.802 *******
2026-02-02 00:53:59.215640 | orchestrator | changed: [testbed-manager]
2026-02-02 00:53:59.215649 | orchestrator |
2026-02-02 00:53:59.215683 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:53:59.215693 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:53:59.215703 | orchestrator |
2026-02-02 00:53:59.215712 | orchestrator |
2026-02-02 00:53:59.215721 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:53:59.215729 | orchestrator | Monday 02 February 2026 00:52:22 +0000 (0:00:00.526) 0:00:03.328 *******
2026-02-02 00:53:59.215738 | orchestrator | ===============================================================================
2026-02-02 00:53:59.215747 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.56s
2026-02-02 00:53:59.215755 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.08s
2026-02-02 00:53:59.215764 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.53s
2026-02-02 00:53:59.215773 | orchestrator |
2026-02-02 00:53:59.215781 | orchestrator |
2026-02-02 00:53:59.215790 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-02 00:53:59.215799 | orchestrator |
2026-02-02 00:53:59.215808 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-02 00:53:59.215816 | orchestrator | Monday 02 February 2026 00:52:18 +0000 (0:00:00.173) 0:00:00.173 *******
2026-02-02 00:53:59.215886 | orchestrator | ok: [testbed-manager]
2026-02-02 00:53:59.215898 | orchestrator |
2026-02-02 00:53:59.215908 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-02 00:53:59.215918 | orchestrator | Monday 02 February 2026 00:52:19 +0000 (0:00:00.705) 0:00:00.878 *******
2026-02-02 00:53:59.215928 | orchestrator | ok: [testbed-manager]
2026-02-02 00:53:59.215937 | orchestrator |
2026-02-02 00:53:59.215948 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-02 00:53:59.215958 | orchestrator | Monday 02 February 2026 00:52:20 +0000 (0:00:00.688) 0:00:01.567 *******
2026-02-02 00:53:59.215968 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-02 00:53:59.215978 | orchestrator |
2026-02-02 00:53:59.215988 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-02 00:53:59.215998 | orchestrator | Monday 02 February 2026 00:52:20 +0000 (0:00:00.795) 0:00:02.362 *******
2026-02-02 00:53:59.216009 | orchestrator | changed: [testbed-manager]
2026-02-02 00:53:59.216019 | orchestrator |
2026-02-02 00:53:59.216029 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-02 00:53:59.216039 | orchestrator | Monday 02 February 2026 00:52:22 +0000 (0:00:02.053) 0:00:04.415 *******
2026-02-02 00:53:59.216049 | orchestrator | changed: [testbed-manager]
2026-02-02 00:53:59.216058 | orchestrator |
2026-02-02 00:53:59.216066 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-02 00:53:59.216075 | orchestrator | Monday 02 February 2026 00:52:23 +0000 (0:00:00.581) 0:00:04.997 *******
2026-02-02 00:53:59.216113 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-02 00:53:59.216123 | orchestrator |
2026-02-02 00:53:59.216132 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-02 00:53:59.216141 | orchestrator | Monday 02 February 2026 00:52:25 +0000 (0:00:02.136) 0:00:07.133 *******
2026-02-02 00:53:59.216150 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-02 00:53:59.216159 | orchestrator |
2026-02-02 00:53:59.216252 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-02 00:53:59.216271 | orchestrator | Monday 02 February 2026 00:52:26 +0000 (0:00:00.948) 0:00:08.082 *******
2026-02-02 00:53:59.216280 | orchestrator | ok: [testbed-manager]
2026-02-02 00:53:59.216289 | orchestrator |
2026-02-02 00:53:59.216298 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-02 00:53:59.216314 | orchestrator | Monday 02 February 2026 00:52:27 +0000 (0:00:00.509) 0:00:08.592 *******
2026-02-02 00:53:59.216323 | orchestrator | ok: [testbed-manager]
2026-02-02 00:53:59.216331 | orchestrator |
2026-02-02 00:53:59.216340 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:53:59.216349 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 00:53:59.216358 | orchestrator |
2026-02-02 00:53:59.216367 | orchestrator |
2026-02-02 00:53:59.216402 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:53:59.216411 | orchestrator | Monday 02 February 2026 00:52:27 +0000 (0:00:00.358) 0:00:08.950 *******
2026-02-02 00:53:59.216420 | orchestrator | ===============================================================================
2026-02-02 00:53:59.216428 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.14s
2026-02-02 00:53:59.216437 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.05s
2026-02-02 00:53:59.216446 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.95s
2026-02-02 00:53:59.216486 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.79s
2026-02-02 00:53:59.216496 | orchestrator | Get home directory of operator user ------------------------------------- 0.71s
2026-02-02 00:53:59.216505 | orchestrator | Create .kube directory -------------------------------------------------- 0.69s
2026-02-02 00:53:59.216513 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.58s
2026-02-02 00:53:59.216527 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.51s
2026-02-02 00:53:59.216536 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.36s
2026-02-02 00:53:59.216545 | orchestrator |
2026-02-02 00:53:59.216554 | orchestrator | 2026-02-02 00:53:59 | INFO  | Task a250c898-a1b5-4f3a-8cdf-058f0a6d855d is in state SUCCESS
2026-02-02 00:53:59.216869 | orchestrator |
2026-02-02 00:53:59.216889 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-02-02 00:53:59.216899 | orchestrator |
2026-02-02 00:53:59.216907 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-02-02 00:53:59.216917 | orchestrator | Monday 02 February 2026 00:50:43 +0000 (0:00:00.149) 0:00:00.149 *******
2026-02-02 00:53:59.216926 | orchestrator | ok: [localhost] => {
2026-02-02 00:53:59.216935 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-02-02 00:53:59.216944 | orchestrator | }
2026-02-02 00:53:59.216953 | orchestrator |
2026-02-02 00:53:59.216962 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-02-02 00:53:59.216971 | orchestrator | Monday 02 February 2026 00:50:43 +0000 (0:00:00.077) 0:00:00.226 *******
2026-02-02 00:53:59.216980 | orchestrator | fatal: [localhost]: FAILED!
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-02-02 00:53:59.216990 | orchestrator | ...ignoring
2026-02-02 00:53:59.216999 | orchestrator |
2026-02-02 00:53:59.217007 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-02-02 00:53:59.217016 | orchestrator | Monday 02 February 2026 00:50:47 +0000 (0:00:03.674) 0:00:03.901 *******
2026-02-02 00:53:59.217025 | orchestrator | skipping: [localhost]
2026-02-02 00:53:59.217034 | orchestrator |
2026-02-02 00:53:59.217042 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-02-02 00:53:59.217052 | orchestrator | Monday 02 February 2026 00:50:47 +0000 (0:00:00.121) 0:00:04.022 *******
2026-02-02 00:53:59.217060 | orchestrator | ok: [localhost]
2026-02-02 00:53:59.217069 | orchestrator |
2026-02-02 00:53:59.217078 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 00:53:59.217087 | orchestrator |
2026-02-02 00:53:59.217096 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 00:53:59.217113 | orchestrator | Monday 02 February 2026 00:50:48 +0000 (0:00:00.479) 0:00:04.501 *******
2026-02-02 00:53:59.217122 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:53:59.217131 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:53:59.217140 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:53:59.217149 | orchestrator |
2026-02-02 00:53:59.217157 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 00:53:59.217166 | orchestrator | Monday 02 February 2026 00:50:49 +0000 (0:00:01.608) 0:00:06.110 *******
2026-02-02 00:53:59.217175 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-02-02 00:53:59.217183 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-02-02 00:53:59.217192 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-02-02 00:53:59.217201 | orchestrator |
2026-02-02 00:53:59.217209 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-02-02 00:53:59.217218 | orchestrator |
2026-02-02 00:53:59.217227 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-02 00:53:59.217236 | orchestrator | Monday 02 February 2026 00:50:51 +0000 (0:00:01.561) 0:00:07.671 *******
2026-02-02 00:53:59.217244 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:53:59.217253 | orchestrator |
2026-02-02 00:53:59.217262 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-02 00:53:59.217331 | orchestrator | Monday 02 February 2026 00:50:52 +0000 (0:00:01.084) 0:00:08.380 *******
2026-02-02 00:53:59.217343 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:53:59.217352 | orchestrator |
2026-02-02 00:53:59.217360 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-02-02 00:53:59.217369 | orchestrator | Monday 02 February 2026 00:50:53 +0000 (0:00:00.388) 0:00:09.464 *******
2026-02-02 00:53:59.217378 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:53:59.217387 | orchestrator |
2026-02-02 00:53:59.217396 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-02-02 00:53:59.217404 | orchestrator | Monday 02 February 2026 00:50:53 +0000 (0:00:00.769) 0:00:09.853 *******
2026-02-02 00:53:59.217413 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:53:59.217422 | orchestrator |
2026-02-02 00:53:59.217431 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-02-02 00:53:59.217439 | orchestrator | Monday 02 February 2026 00:50:54 +0000 (0:00:00.382) 0:00:10.622 *******
2026-02-02 00:53:59.217448 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:53:59.217457 | orchestrator |
2026-02-02 00:53:59.217466 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-02-02 00:53:59.217499 | orchestrator | Monday 02 February 2026 00:50:54 +0000 (0:00:00.801) 0:00:11.005 *******
2026-02-02 00:53:59.217515 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:53:59.217529 | orchestrator |
2026-02-02 00:53:59.217539 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-02 00:53:59.217550 | orchestrator | Monday 02 February 2026 00:50:55 +0000 (0:00:00.618) 0:00:11.806 *******
2026-02-02 00:53:59.217560 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:53:59.217570 | orchestrator |
2026-02-02 00:53:59.217580 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-02 00:53:59.217588 | orchestrator | Monday 02 February 2026 00:50:56 +0000 (0:00:00.969) 0:00:12.425 *******
2026-02-02 00:53:59.217597 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:53:59.217606 | orchestrator |
2026-02-02 00:53:59.217615 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-02-02 00:53:59.217630 | orchestrator | Monday 02 February 2026 00:50:57 +0000 (0:00:00.715) 0:00:13.394 *******
2026-02-02 00:53:59.217639 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:53:59.217648 | orchestrator |
2026-02-02 00:53:59.217657 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-02-02 00:53:59.217672 | orchestrator | Monday 02 February 2026 00:50:57 +0000 (0:00:00.741) 0:00:14.110 *******
2026-02-02 00:53:59.217681 | orchestrator |
skipping: [testbed-node-0] 2026-02-02 00:53:59.217690 | orchestrator | 2026-02-02 00:53:59.217712 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-02 00:53:59.217721 | orchestrator | Monday 02 February 2026 00:50:58 +0000 (0:00:00.741) 0:00:14.852 ******* 2026-02-02 00:53:59.217734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:53:59.217747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:53:59.217759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:53:59.217769 | orchestrator | 2026-02-02 00:53:59.217778 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-02 00:53:59.217787 | orchestrator | Monday 02 February 2026 00:51:00 +0000 (0:00:02.387) 0:00:17.239 ******* 2026-02-02 00:53:59.217806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:53:59.217824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:53:59.217834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:53:59.217844 | orchestrator | 2026-02-02 00:53:59.217854 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-02 00:53:59.217863 | orchestrator | Monday 02 February 2026 00:51:02 +0000 (0:00:01.672) 0:00:18.911 ******* 2026-02-02 00:53:59.217872 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-02 00:53:59.217881 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-02 00:53:59.217890 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-02 00:53:59.217899 | orchestrator | 2026-02-02 00:53:59.217908 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-02-02 00:53:59.217916 | orchestrator | Monday 02 February 2026 00:51:03 +0000 (0:00:01.354) 0:00:20.265 ******* 2026-02-02 00:53:59.217925 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-02 00:53:59.217938 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-02 00:53:59.217973 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-02 00:53:59.217983 | orchestrator | 2026-02-02 00:53:59.217992 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-02 00:53:59.218001 | orchestrator | Monday 02 February 2026 00:51:05 +0000 (0:00:01.641) 0:00:21.907 ******* 2026-02-02 00:53:59.218010 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-02 00:53:59.218075 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-02 00:53:59.218092 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-02 00:53:59.218101 | orchestrator | 2026-02-02 00:53:59.218110 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-02 00:53:59.218119 | orchestrator | Monday 02 February 2026 00:51:07 +0000 (0:00:01.524) 0:00:23.431 ******* 2026-02-02 00:53:59.218134 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-02 00:53:59.218144 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-02 00:53:59.218153 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-02 00:53:59.218162 | orchestrator | 2026-02-02 00:53:59.218170 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-02-02 00:53:59.218179 | orchestrator | Monday 02 February 2026 00:51:09 +0000 (0:00:02.576) 0:00:26.008 ******* 2026-02-02 00:53:59.218188 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-02 00:53:59.218196 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-02 00:53:59.218205 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-02 00:53:59.218214 | orchestrator | 2026-02-02 00:53:59.218223 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-02 00:53:59.218232 | orchestrator | Monday 02 February 2026 00:51:11 +0000 (0:00:01.635) 0:00:27.643 ******* 2026-02-02 00:53:59.218240 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-02 00:53:59.218249 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-02 00:53:59.218258 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-02 00:53:59.218267 | orchestrator | 2026-02-02 00:53:59.218275 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-02 00:53:59.218284 | orchestrator | Monday 02 February 2026 00:51:12 +0000 (0:00:01.562) 0:00:29.206 ******* 2026-02-02 00:53:59.218293 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:53:59.218302 | orchestrator | 2026-02-02 00:53:59.218314 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-02-02 00:53:59.218329 | orchestrator | Monday 02 February 2026 00:51:13 +0000 (0:00:00.832) 0:00:30.039 ******* 2026-02-02 
00:53:59.218347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:53:59.218374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:53:59.218399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:53:59.218410 | orchestrator | 2026-02-02 00:53:59.218419 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-02-02 00:53:59.218428 | orchestrator | Monday 02 February 2026 00:51:14 +0000 (0:00:01.085) 0:00:31.124 ******* 2026-02-02 00:53:59.218437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 00:53:59.218447 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:53:59.218457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 00:53:59.218527 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:53:59.218552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 00:53:59.218563 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:53:59.218573 | orchestrator | 2026-02-02 00:53:59.218581 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-02 00:53:59.218590 | orchestrator | Monday 02 February 2026 00:51:15 +0000 (0:00:00.747) 0:00:31.872 ******* 2026-02-02 00:53:59.218602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 00:53:59.218618 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:53:59.218633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 00:53:59.218662 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:53:59.218677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 00:53:59.218724 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:53:59.218739 | orchestrator | 2026-02-02 00:53:59.218756 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-02 00:53:59.218771 | orchestrator | Monday 02 February 2026 00:51:16 +0000 (0:00:00.787) 0:00:32.660 ******* 2026-02-02 00:53:59.218799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:53:59.218811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:53:59.218828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:53:59.218838 | orchestrator | 2026-02-02 00:53:59.218847 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-02 00:53:59.218856 | orchestrator | Monday 02 February 2026 00:51:17 +0000 (0:00:00.974) 0:00:33.634 ******* 2026-02-02 00:53:59.218865 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 00:53:59.218873 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:53:59.218882 | orchestrator | } 2026-02-02 00:53:59.218891 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 00:53:59.218900 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:53:59.218908 | orchestrator | } 2026-02-02 00:53:59.218917 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 00:53:59.218926 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:53:59.218934 | orchestrator | } 2026-02-02 00:53:59.218943 | orchestrator | 2026-02-02 00:53:59.218952 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 00:53:59.218961 | orchestrator | Monday 02 February 2026 00:51:17 +0000 (0:00:00.366) 0:00:34.001 ******* 2026-02-02 00:53:59.218980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 00:53:59.218991 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:53:59.219000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 00:53:59.219015 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:53:59.219025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 00:53:59.219035 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:53:59.219044 | orchestrator | 2026-02-02 00:53:59.219052 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-02 00:53:59.219061 | orchestrator | Monday 02 February 2026 00:51:18 +0000 (0:00:01.132) 0:00:35.133 ******* 2026-02-02 00:53:59.219070 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:53:59.219079 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:53:59.219088 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:53:59.219096 | orchestrator | 2026-02-02 00:53:59.219105 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-02 00:53:59.219114 | orchestrator | Monday 02 February 2026 00:51:20 +0000 (0:00:01.187) 0:00:36.321 ******* 2026-02-02 00:53:59.219123 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:53:59.219132 | orchestrator | changed: [testbed-node-1] 
2026-02-02 00:53:59.219141 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:53:59.219149 | orchestrator | 2026-02-02 00:53:59.219158 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-02 00:53:59.219167 | orchestrator | Monday 02 February 2026 00:51:28 +0000 (0:00:08.034) 0:00:44.355 ******* 2026-02-02 00:53:59.219176 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:53:59.219185 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:53:59.219193 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:53:59.219202 | orchestrator | 2026-02-02 00:53:59.219211 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-02 00:53:59.219220 | orchestrator | 2026-02-02 00:53:59.219229 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-02 00:53:59.219237 | orchestrator | Monday 02 February 2026 00:51:28 +0000 (0:00:00.664) 0:00:45.020 ******* 2026-02-02 00:53:59.219246 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:53:59.219255 | orchestrator | 2026-02-02 00:53:59.219264 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-02 00:53:59.219276 | orchestrator | Monday 02 February 2026 00:51:29 +0000 (0:00:00.663) 0:00:45.683 ******* 2026-02-02 00:53:59.219285 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:53:59.219293 | orchestrator | 2026-02-02 00:53:59.219302 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-02 00:53:59.219311 | orchestrator | Monday 02 February 2026 00:51:29 +0000 (0:00:00.186) 0:00:45.869 ******* 2026-02-02 00:53:59.219320 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:53:59.219333 | orchestrator | 2026-02-02 00:53:59.219347 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-02 00:53:59.219356 | 
orchestrator | Monday 02 February 2026 00:51:31 +0000 (0:00:01.791) 0:00:47.661 ******* 2026-02-02 00:53:59.219365 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:53:59.219374 | orchestrator | 2026-02-02 00:53:59.219395 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-02 00:53:59.219404 | orchestrator | 2026-02-02 00:53:59.219413 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-02 00:53:59.219422 | orchestrator | Monday 02 February 2026 00:53:24 +0000 (0:01:52.989) 0:02:40.650 ******* 2026-02-02 00:53:59.219439 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:53:59.219448 | orchestrator | 2026-02-02 00:53:59.219457 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-02 00:53:59.219465 | orchestrator | Monday 02 February 2026 00:53:25 +0000 (0:00:00.665) 0:02:41.316 ******* 2026-02-02 00:53:59.219498 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:53:59.219514 | orchestrator | 2026-02-02 00:53:59.219529 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-02 00:53:59.219545 | orchestrator | Monday 02 February 2026 00:53:25 +0000 (0:00:00.297) 0:02:41.614 ******* 2026-02-02 00:53:59.219556 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:53:59.219564 | orchestrator | 2026-02-02 00:53:59.219573 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-02 00:53:59.219582 | orchestrator | Monday 02 February 2026 00:53:32 +0000 (0:00:06.912) 0:02:48.527 ******* 2026-02-02 00:53:59.219591 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:53:59.219600 | orchestrator | 2026-02-02 00:53:59.219608 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-02 00:53:59.219617 | orchestrator | 2026-02-02 00:53:59.219626 | 
orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-02 00:53:59.219635 | orchestrator | Monday 02 February 2026 00:53:38 +0000 (0:00:06.291) 0:02:54.819 ******* 2026-02-02 00:53:59.219643 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:53:59.219652 | orchestrator | 2026-02-02 00:53:59.219661 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-02 00:53:59.219669 | orchestrator | Monday 02 February 2026 00:53:39 +0000 (0:00:01.180) 0:02:55.999 ******* 2026-02-02 00:53:59.219678 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:53:59.219687 | orchestrator | 2026-02-02 00:53:59.219696 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-02 00:53:59.219704 | orchestrator | Monday 02 February 2026 00:53:39 +0000 (0:00:00.127) 0:02:56.127 ******* 2026-02-02 00:53:59.219713 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:53:59.219722 | orchestrator | 2026-02-02 00:53:59.219730 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-02 00:53:59.219739 | orchestrator | Monday 02 February 2026 00:53:41 +0000 (0:00:01.619) 0:02:57.746 ******* 2026-02-02 00:53:59.219748 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:53:59.219757 | orchestrator | 2026-02-02 00:53:59.219766 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-02 00:53:59.219774 | orchestrator | 2026-02-02 00:53:59.219783 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-02 00:53:59.219792 | orchestrator | Monday 02 February 2026 00:53:52 +0000 (0:00:11.172) 0:03:08.918 ******* 2026-02-02 00:53:59.219801 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:53:59.219810 | orchestrator | 2026-02-02 00:53:59.219819 | 
orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-02 00:53:59.219827 | orchestrator | Monday 02 February 2026 00:53:53 +0000 (0:00:01.195) 0:03:10.114 ******* 2026-02-02 00:53:59.219836 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:53:59.219845 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:53:59.219854 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:53:59.219862 | orchestrator | 2026-02-02 00:53:59.219877 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:53:59.219887 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-02-02 00:53:59.219896 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-02-02 00:53:59.219905 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-02 00:53:59.219914 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-02 00:53:59.219923 | orchestrator | 2026-02-02 00:53:59.219932 | orchestrator | 2026-02-02 00:53:59.219940 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 00:53:59.219949 | orchestrator | Monday 02 February 2026 00:53:56 +0000 (0:00:02.646) 0:03:12.760 ******* 2026-02-02 00:53:59.219958 | orchestrator | =============================================================================== 2026-02-02 00:53:59.219967 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 130.46s 2026-02-02 00:53:59.219975 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.32s 2026-02-02 00:53:59.219984 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.03s 2026-02-02 00:53:59.219993 | orchestrator | Check RabbitMQ service 
-------------------------------------------------- 3.67s 2026-02-02 00:53:59.220005 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.65s 2026-02-02 00:53:59.220014 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.58s 2026-02-02 00:53:59.220028 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.51s 2026-02-02 00:53:59.220050 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.39s 2026-02-02 00:53:59.220066 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.67s 2026-02-02 00:53:59.220144 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.64s 2026-02-02 00:53:59.220158 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.64s 2026-02-02 00:53:59.220167 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.61s 2026-02-02 00:53:59.220176 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.56s 2026-02-02 00:53:59.220185 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.56s 2026-02-02 00:53:59.220193 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.52s 2026-02-02 00:53:59.220202 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.35s 2026-02-02 00:53:59.220211 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.19s 2026-02-02 00:53:59.220220 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.19s 2026-02-02 00:53:59.220229 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.13s 2026-02-02 00:53:59.220237 | orchestrator | service-cert-copy : rabbitmq | Copying 
over extra CA certificates ------- 1.09s 2026-02-02 00:53:59.220246 | orchestrator | 2026-02-02 00:53:59 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:53:59.220371 | orchestrator | 2026-02-02 00:53:59 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:53:59.220384 | orchestrator | 2026-02-02 00:53:59 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:53:59.220393 | orchestrator | 2026-02-02 00:53:59 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:54:02.258584 | orchestrator | 2026-02-02 00:54:02 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:54:02.260767 | orchestrator | 2026-02-02 00:54:02 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:54:02.263015 | orchestrator | 2026-02-02 00:54:02 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:54:02.263181 | orchestrator | 2026-02-02 00:54:02 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:54:05.309649 | orchestrator | 2026-02-02 00:54:05 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:54:05.311901 | orchestrator | 2026-02-02 00:54:05 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:54:05.314243 | orchestrator | 2026-02-02 00:54:05 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state STARTED 2026-02-02 00:54:05.314287 | orchestrator | 2026-02-02 00:54:05 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:54:08.358791 | orchestrator | 2026-02-02 00:54:08 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:54:08.360332 | orchestrator | 2026-02-02 00:54:08 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:54:08.363708 | orchestrator | 2026-02-02 00:54:08 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in 
state STARTED 2026-02-02 00:54:08.363768 | orchestrator | 2026-02-02 00:54:08 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:55:24.395726 | orchestrator | 2026-02-02 00:55:24 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:55:24.397689 | orchestrator | 2026-02-02 00:55:24 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:55:24.402654 | orchestrator | 2026-02-02 00:55:24 | INFO  | Task 23442f72-38ac-47ab-a81b-8ddde0e19ce1 is in state SUCCESS 2026-02-02 00:55:24.402835 | orchestrator | 2026-02-02 00:55:24.405138 | orchestrator | 2026-02-02 00:55:24.405194 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 00:55:24.405208 | orchestrator | 2026-02-02 00:55:24.405219 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 00:55:24.405231 | orchestrator | Monday 02 February 2026 00:51:36 +0000 (0:00:00.197) 0:00:00.197 ******* 2026-02-02 00:55:24.405279 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:55:24.405290 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:55:24.405302 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:55:24.405338 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:55:24.405349 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:55:24.405360 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:55:24.405398 | orchestrator | 2026-02-02 
00:55:24.405419 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 00:55:24.405432 | orchestrator | Monday 02 February 2026 00:51:36 +0000 (0:00:00.839) 0:00:01.037 ******* 2026-02-02 00:55:24.405471 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-02 00:55:24.405484 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-02 00:55:24.405495 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-02 00:55:24.405506 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-02 00:55:24.405517 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-02 00:55:24.405528 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-02 00:55:24.405539 | orchestrator | 2026-02-02 00:55:24.405550 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-02 00:55:24.405561 | orchestrator | 2026-02-02 00:55:24.405572 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-02 00:55:24.405582 | orchestrator | Monday 02 February 2026 00:51:37 +0000 (0:00:01.000) 0:00:02.037 ******* 2026-02-02 00:55:24.405594 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:55:24.405606 | orchestrator | 2026-02-02 00:55:24.405617 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-02 00:55:24.405628 | orchestrator | Monday 02 February 2026 00:51:39 +0000 (0:00:01.134) 0:00:03.171 ******* 2026-02-02 00:55:24.405640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.405755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.405770 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.405781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.405819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.405850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.405869 | orchestrator | 2026-02-02 00:55:24.405905 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-02 00:55:24.405923 | orchestrator | Monday 02 February 2026 00:51:40 +0000 (0:00:01.287) 0:00:04.459 ******* 2026-02-02 00:55:24.405940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.405958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.405976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.405993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406124 | orchestrator | 2026-02-02 00:55:24.406144 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-02 00:55:24.406165 | orchestrator | Monday 02 February 2026 00:51:42 +0000 (0:00:02.112) 0:00:06.572 ******* 2026-02-02 00:55:24.406186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406279 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406341 | orchestrator | 2026-02-02 00:55:24.406361 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-02 00:55:24.406404 | orchestrator | Monday 02 February 2026 00:51:44 +0000 (0:00:01.971) 0:00:08.543 ******* 2026-02-02 00:55:24.406424 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406498 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406593 | orchestrator | 2026-02-02 00:55:24.406623 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-02-02 00:55:24.406643 | orchestrator | Monday 02 February 2026 00:51:46 +0000 (0:00:01.920) 0:00:10.464 ******* 
2026-02-02 00:55:24.406662 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406702 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.406799 | orchestrator | 2026-02-02 00:55:24.406818 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-02-02 00:55:24.406836 | orchestrator | Monday 02 February 2026 00:51:48 +0000 (0:00:02.160) 0:00:12.625 ******* 2026-02-02 00:55:24.406855 | orchestrator | changed: [testbed-node-3] => { 2026-02-02 00:55:24.406875 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:55:24.406894 | orchestrator | } 2026-02-02 00:55:24.406914 | orchestrator | changed: [testbed-node-4] => { 2026-02-02 00:55:24.406932 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:55:24.406949 | orchestrator | } 2026-02-02 00:55:24.406968 | orchestrator | changed: [testbed-node-5] => { 2026-02-02 00:55:24.406987 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:55:24.407006 | orchestrator | } 2026-02-02 00:55:24.407025 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 00:55:24.407043 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:55:24.407060 | orchestrator | } 2026-02-02 00:55:24.407079 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 00:55:24.407098 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 
00:55:24.407118 | orchestrator | } 2026-02-02 00:55:24.407136 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 00:55:24.407153 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:55:24.407171 | orchestrator | } 2026-02-02 00:55:24.407190 | orchestrator | 2026-02-02 00:55:24.407210 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 00:55:24.407229 | orchestrator | Monday 02 February 2026 00:51:49 +0000 (0:00:01.167) 0:00:13.792 ******* 2026-02-02 00:55:24.407248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.407290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.407311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.407330 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:55:24.407347 
| orchestrator | skipping: [testbed-node-4] 2026-02-02 00:55:24.407365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.407424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.407469 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:55:24.407487 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.407506 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.407533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.407549 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.407560 | orchestrator | 2026-02-02 00:55:24.407571 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-02 00:55:24.407582 | orchestrator | Monday 02 February 2026 00:51:51 +0000 (0:00:01.587) 0:00:15.379 ******* 
2026-02-02 00:55:24.407593 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:55:24.407604 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:55:24.407615 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:55:24.407626 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:55:24.407636 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:55:24.407647 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:55:24.407658 | orchestrator | 2026-02-02 00:55:24.407669 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-02 00:55:24.407680 | orchestrator | Monday 02 February 2026 00:51:53 +0000 (0:00:02.551) 0:00:17.931 ******* 2026-02-02 00:55:24.407691 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-02 00:55:24.407702 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-02 00:55:24.407713 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-02 00:55:24.407723 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-02 00:55:24.407734 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-02 00:55:24.407745 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-02 00:55:24.407755 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 00:55:24.407766 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 00:55:24.407777 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 00:55:24.407788 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 
2026-02-02 00:55:24.407798 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 00:55:24.407809 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-02 00:55:24.407829 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 00:55:24.407841 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-02 00:55:24.407852 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-02 00:55:24.407863 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-02 00:55:24.407874 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 00:55:24.407886 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-02 00:55:24.407904 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-02 00:55:24.407914 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 00:55:24.407926 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 00:55:24.407937 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 00:55:24.407948 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 00:55:24.407958 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 00:55:24.407969 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 00:55:24.407981 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 00:55:24.407992 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 00:55:24.408012 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 00:55:24.408030 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 00:55:24.408056 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 00:55:24.408076 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 00:55:24.408095 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 00:55:24.408111 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 00:55:24.408122 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 00:55:24.408133 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 00:55:24.408144 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 00:55:24.408155 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-02 00:55:24.408166 | orchestrator | ok: [testbed-node-3] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-02 00:55:24.408177 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-02 00:55:24.408187 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-02 00:55:24.408198 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-02 00:55:24.408209 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-02 00:55:24.408220 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-02 00:55:24.408231 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-02 00:55:24.408242 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-02 00:55:24.408253 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-02 00:55:24.408272 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-02 00:55:24.408290 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-02 00:55:24.408302 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-02 00:55:24.408313 | orchestrator | ok: [testbed-node-3] => (item={'name': 
'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-02 00:55:24.408324 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-02 00:55:24.408335 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-02 00:55:24.408346 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-02 00:55:24.408357 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-02 00:55:24.408437 | orchestrator | 2026-02-02 00:55:24.408467 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 00:55:24.408486 | orchestrator | Monday 02 February 2026 00:52:15 +0000 (0:00:21.826) 0:00:39.757 ******* 2026-02-02 00:55:24.408507 | orchestrator | 2026-02-02 00:55:24.408528 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 00:55:24.408547 | orchestrator | Monday 02 February 2026 00:52:16 +0000 (0:00:00.494) 0:00:40.252 ******* 2026-02-02 00:55:24.408567 | orchestrator | 2026-02-02 00:55:24.408587 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 00:55:24.408605 | orchestrator | Monday 02 February 2026 00:52:16 +0000 (0:00:00.281) 0:00:40.534 ******* 2026-02-02 00:55:24.408623 | orchestrator | 2026-02-02 00:55:24.408642 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 00:55:24.408660 | orchestrator | Monday 02 February 2026 00:52:16 +0000 (0:00:00.073) 0:00:40.608 ******* 2026-02-02 00:55:24.408679 | orchestrator | 2026-02-02 00:55:24.408700 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-02 00:55:24.408720 | orchestrator | Monday 02 February 2026 00:52:16 +0000 (0:00:00.068) 0:00:40.677 ******* 2026-02-02 00:55:24.408738 | orchestrator | 2026-02-02 00:55:24.408760 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 00:55:24.408780 | orchestrator | Monday 02 February 2026 00:52:16 +0000 (0:00:00.073) 0:00:40.750 ******* 2026-02-02 00:55:24.408797 | orchestrator | 2026-02-02 00:55:24.408815 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-02 00:55:24.408849 | orchestrator | Monday 02 February 2026 00:52:16 +0000 (0:00:00.073) 0:00:40.823 ******* 2026-02-02 00:55:24.408869 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:55:24.408888 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:55:24.408906 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:55:24.408925 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:55:24.408943 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:55:24.408959 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:55:24.408973 | orchestrator | 2026-02-02 00:55:24.408983 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-02 00:55:24.408993 | orchestrator | Monday 02 February 2026 00:52:19 +0000 (0:00:02.985) 0:00:43.809 ******* 2026-02-02 00:55:24.409002 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:55:24.409012 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:55:24.409022 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:55:24.409031 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:55:24.409041 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:55:24.409051 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:55:24.409061 | orchestrator | 2026-02-02 00:55:24.409070 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 
2026-02-02 00:55:24.409090 | orchestrator | 2026-02-02 00:55:24.409100 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-02 00:55:24.409110 | orchestrator | Monday 02 February 2026 00:52:28 +0000 (0:00:08.792) 0:00:52.601 ******* 2026-02-02 00:55:24.409119 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:55:24.409129 | orchestrator | 2026-02-02 00:55:24.409139 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-02 00:55:24.409149 | orchestrator | Monday 02 February 2026 00:52:29 +0000 (0:00:00.646) 0:00:53.248 ******* 2026-02-02 00:55:24.409158 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:55:24.409168 | orchestrator | 2026-02-02 00:55:24.409178 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-02 00:55:24.409188 | orchestrator | Monday 02 February 2026 00:52:30 +0000 (0:00:01.025) 0:00:54.273 ******* 2026-02-02 00:55:24.409198 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:55:24.409208 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:55:24.409218 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:55:24.409227 | orchestrator | 2026-02-02 00:55:24.409237 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-02 00:55:24.409248 | orchestrator | Monday 02 February 2026 00:52:31 +0000 (0:00:00.902) 0:00:55.176 ******* 2026-02-02 00:55:24.409265 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:55:24.409281 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:55:24.409297 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:55:24.409332 | orchestrator | 2026-02-02 00:55:24.409349 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 
2026-02-02 00:55:24.409364 | orchestrator | Monday 02 February 2026 00:52:31 +0000 (0:00:00.352) 0:00:55.529 ******* 2026-02-02 00:55:24.409401 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:55:24.409418 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:55:24.409434 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:55:24.409449 | orchestrator | 2026-02-02 00:55:24.409465 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-02 00:55:24.409495 | orchestrator | Monday 02 February 2026 00:52:31 +0000 (0:00:00.595) 0:00:56.124 ******* 2026-02-02 00:55:24.409512 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:55:24.409528 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:55:24.409544 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:55:24.409558 | orchestrator | 2026-02-02 00:55:24.409574 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-02 00:55:24.409590 | orchestrator | Monday 02 February 2026 00:52:32 +0000 (0:00:00.341) 0:00:56.465 ******* 2026-02-02 00:55:24.409606 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:55:24.409621 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:55:24.409638 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:55:24.409654 | orchestrator | 2026-02-02 00:55:24.409671 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-02 00:55:24.409687 | orchestrator | Monday 02 February 2026 00:52:32 +0000 (0:00:00.349) 0:00:56.815 ******* 2026-02-02 00:55:24.409703 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.409720 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.409736 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.409752 | orchestrator | 2026-02-02 00:55:24.409767 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-02 00:55:24.409783 | orchestrator | Monday 02 
February 2026 00:52:33 +0000 (0:00:00.333) 0:00:57.148 ******* 2026-02-02 00:55:24.409799 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.409815 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.409830 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.409847 | orchestrator | 2026-02-02 00:55:24.409864 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-02 00:55:24.409881 | orchestrator | Monday 02 February 2026 00:52:33 +0000 (0:00:00.582) 0:00:57.731 ******* 2026-02-02 00:55:24.409913 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.409931 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.409946 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.409962 | orchestrator | 2026-02-02 00:55:24.409979 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-02 00:55:24.409996 | orchestrator | Monday 02 February 2026 00:52:33 +0000 (0:00:00.325) 0:00:58.056 ******* 2026-02-02 00:55:24.410012 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.410182 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.410199 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.410215 | orchestrator | 2026-02-02 00:55:24.410232 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-02 00:55:24.410249 | orchestrator | Monday 02 February 2026 00:52:34 +0000 (0:00:00.306) 0:00:58.363 ******* 2026-02-02 00:55:24.410267 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.410284 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.410300 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.410317 | orchestrator | 2026-02-02 00:55:24.410333 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-02 00:55:24.410351 | orchestrator | Monday 02 
February 2026 00:52:34 +0000 (0:00:00.322) 0:00:58.686 ******* 2026-02-02 00:55:24.410399 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.410420 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.410439 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.410457 | orchestrator | 2026-02-02 00:55:24.410476 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-02 00:55:24.410493 | orchestrator | Monday 02 February 2026 00:52:35 +0000 (0:00:00.573) 0:00:59.259 ******* 2026-02-02 00:55:24.410511 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.410530 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.410548 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.410566 | orchestrator | 2026-02-02 00:55:24.410584 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-02 00:55:24.410601 | orchestrator | Monday 02 February 2026 00:52:35 +0000 (0:00:00.330) 0:00:59.590 ******* 2026-02-02 00:55:24.410620 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.410637 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.410655 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.410673 | orchestrator | 2026-02-02 00:55:24.410692 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-02 00:55:24.410709 | orchestrator | Monday 02 February 2026 00:52:35 +0000 (0:00:00.334) 0:00:59.924 ******* 2026-02-02 00:55:24.410728 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.410746 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.410764 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.410781 | orchestrator | 2026-02-02 00:55:24.410799 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-02 00:55:24.410817 | orchestrator | Monday 02 
February 2026 00:52:36 +0000 (0:00:00.356) 0:01:00.281 ******* 2026-02-02 00:55:24.410834 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.410852 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.410869 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.410887 | orchestrator | 2026-02-02 00:55:24.410905 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-02 00:55:24.410922 | orchestrator | Monday 02 February 2026 00:52:36 +0000 (0:00:00.354) 0:01:00.635 ******* 2026-02-02 00:55:24.410939 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.410957 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.410975 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.410993 | orchestrator | 2026-02-02 00:55:24.411011 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-02 00:55:24.411028 | orchestrator | Monday 02 February 2026 00:52:37 +0000 (0:00:00.750) 0:01:01.386 ******* 2026-02-02 00:55:24.411058 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.411076 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.411094 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.411112 | orchestrator | 2026-02-02 00:55:24.411130 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-02 00:55:24.411146 | orchestrator | Monday 02 February 2026 00:52:37 +0000 (0:00:00.389) 0:01:01.775 ******* 2026-02-02 00:55:24.411164 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:55:24.411182 | orchestrator | 2026-02-02 00:55:24.411212 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-02-02 00:55:24.411230 | orchestrator | Monday 02 February 2026 00:52:38 +0000 (0:00:00.670) 0:01:02.446 
******* 2026-02-02 00:55:24.411246 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:55:24.411262 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:55:24.411279 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:55:24.411295 | orchestrator | 2026-02-02 00:55:24.411312 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-02-02 00:55:24.411328 | orchestrator | Monday 02 February 2026 00:52:39 +0000 (0:00:00.762) 0:01:03.208 ******* 2026-02-02 00:55:24.411344 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:55:24.411360 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:55:24.411395 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:55:24.411412 | orchestrator | 2026-02-02 00:55:24.411428 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-02-02 00:55:24.411445 | orchestrator | Monday 02 February 2026 00:52:39 +0000 (0:00:00.496) 0:01:03.704 ******* 2026-02-02 00:55:24.411462 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.411478 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.411495 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.411511 | orchestrator | 2026-02-02 00:55:24.411528 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-02-02 00:55:24.411545 | orchestrator | Monday 02 February 2026 00:52:39 +0000 (0:00:00.361) 0:01:04.066 ******* 2026-02-02 00:55:24.411561 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.411578 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.411594 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.411611 | orchestrator | 2026-02-02 00:55:24.411628 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-02-02 00:55:24.411645 | orchestrator | Monday 02 February 2026 00:52:40 +0000 (0:00:00.391) 0:01:04.457 ******* 2026-02-02 
00:55:24.411661 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.411677 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.411693 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.411710 | orchestrator | 2026-02-02 00:55:24.411726 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-02-02 00:55:24.411743 | orchestrator | Monday 02 February 2026 00:52:40 +0000 (0:00:00.664) 0:01:05.122 ******* 2026-02-02 00:55:24.411759 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.411775 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.411791 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.411807 | orchestrator | 2026-02-02 00:55:24.411823 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-02-02 00:55:24.411839 | orchestrator | Monday 02 February 2026 00:52:41 +0000 (0:00:00.440) 0:01:05.563 ******* 2026-02-02 00:55:24.411855 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.411871 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.411887 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.411904 | orchestrator | 2026-02-02 00:55:24.411921 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-02-02 00:55:24.411944 | orchestrator | Monday 02 February 2026 00:52:41 +0000 (0:00:00.392) 0:01:05.956 ******* 2026-02-02 00:55:24.411961 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.411986 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.412002 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.412019 | orchestrator | 2026-02-02 00:55:24.412035 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-02 00:55:24.412051 | orchestrator | Monday 02 February 2026 00:52:42 +0000 (0:00:00.426) 0:01:06.382 ******* 
2026-02-02 00:55:24.412070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 
'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.412229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.412280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-02 00:55:24.412306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.412324 | orchestrator | 2026-02-02 00:55:24.412341 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-02 00:55:24.412357 | orchestrator | Monday 02 February 2026 00:52:45 +0000 (0:00:03.674) 0:01:10.057 ******* 2026-02-02 00:55:24.412403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412453 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.412565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 
'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.412609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.412632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.412649 | orchestrator | 2026-02-02 00:55:24.412667 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-02-02 00:55:24.412685 | orchestrator | Monday 02 February 2026 00:52:50 +0000 (0:00:04.662) 0:01:14.719 ******* 2026-02-02 00:55:24.412701 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-02-02 00:55:24.412718 | orchestrator | 2026-02-02 00:55:24.412735 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-02-02 00:55:24.412751 | orchestrator | Monday 02 February 2026 00:52:51 +0000 
(0:00:00.617) 0:01:15.338 ******* 2026-02-02 00:55:24.412768 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:55:24.412783 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:55:24.412800 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:55:24.412815 | orchestrator | 2026-02-02 00:55:24.412832 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-02-02 00:55:24.412848 | orchestrator | Monday 02 February 2026 00:52:52 +0000 (0:00:00.906) 0:01:16.244 ******* 2026-02-02 00:55:24.412865 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:55:24.412880 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:55:24.412897 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:55:24.412913 | orchestrator | 2026-02-02 00:55:24.412930 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-02-02 00:55:24.412947 | orchestrator | Monday 02 February 2026 00:52:53 +0000 (0:00:01.739) 0:01:17.984 ******* 2026-02-02 00:55:24.412963 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:55:24.412979 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:55:24.412995 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:55:24.413011 | orchestrator | 2026-02-02 00:55:24.413026 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-02-02 00:55:24.413042 | orchestrator | Monday 02 February 2026 00:52:55 +0000 (0:00:01.856) 0:01:19.840 ******* 2026-02-02 00:55:24.413067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.413085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.413114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.413131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.413154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 
'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.413172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.413188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.413206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 
00:55:24.413231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.413248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.413281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 00:55:24.413299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.413316 | orchestrator | 2026-02-02 00:55:24.413331 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-02 00:55:24.413348 | orchestrator | Monday 02 February 2026 00:52:59 +0000 (0:00:03.798) 0:01:23.638 ******* 2026-02-02 00:55:24.413363 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 00:55:24.413401 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:55:24.413420 | orchestrator | } 2026-02-02 00:55:24.413437 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 00:55:24.413454 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:55:24.413472 | orchestrator | } 2026-02-02 00:55:24.413489 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 00:55:24.413505 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:55:24.413520 | orchestrator | } 2026-02-02 00:55:24.413535 | orchestrator | 2026-02-02 00:55:24.413557 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 00:55:24.413574 | orchestrator | Monday 02 February 2026 00:52:59 +0000 (0:00:00.427) 0:01:24.066 ******* 2026-02-02 00:55:24.413590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.413607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 
'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.413623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.413671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:55:24.413689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.413706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.413724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.413747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.413764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.413781 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.413798 | orchestrator |
2026-02-02 00:55:24.413815 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-02-02 00:55:24.413840 | orchestrator | Monday 02 February 2026  00:53:02 +0000 (0:00:02.443)       0:01:26.510 *******
2026-02-02 00:55:24.413857 | orchestrator | changed: [testbed-node-0] => (item=[1])
2026-02-02 00:55:24.413873 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-02-02 00:55:24.413890 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-02-02 00:55:24.413904 | orchestrator |
2026-02-02 00:55:24.413921 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-02 00:55:24.413937 | orchestrator | Monday 02 February 2026  00:53:03 +0000 (0:00:00.957)       0:01:27.467 *******
2026-02-02 00:55:24.413953 | orchestrator | changed: [testbed-node-0] => {
2026-02-02 00:55:24.413968 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 00:55:24.413985 | orchestrator | }
2026-02-02 00:55:24.414001 | orchestrator | changed: [testbed-node-1] => {
2026-02-02 00:55:24.414060 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 00:55:24.414082 | orchestrator | }
2026-02-02 00:55:24.414098 | orchestrator | changed: [testbed-node-2] => {
2026-02-02 00:55:24.414113 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 00:55:24.414140 | orchestrator | }
2026-02-02 00:55:24.414157 | orchestrator |
2026-02-02 00:55:24.414173 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-02 00:55:24.414192 | orchestrator | Monday 02 February 2026  00:53:04 +0000 (0:00:00.068)       0:01:28.448 *******
2026-02-02 00:55:24.414209 | orchestrator |
2026-02-02 00:55:24.414226 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-02 00:55:24.414242 | orchestrator | Monday 02 February 2026  00:53:04 +0000 (0:00:00.068)       0:01:28.517 *******
2026-02-02 00:55:24.414258 | orchestrator |
2026-02-02 00:55:24.414276 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-02 00:55:24.414292 | orchestrator | Monday 02 February 2026  00:53:04 +0000 (0:00:00.068)       0:01:28.586 *******
2026-02-02 00:55:24.414309 | orchestrator |
2026-02-02 00:55:24.414325 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-02 00:55:24.414341 | orchestrator | Monday 02 February 2026  00:53:04 +0000 (0:00:00.070)       0:01:28.657 *******
2026-02-02 00:55:24.414357 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:55:24.414394 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:55:24.414412 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:55:24.414428 | orchestrator |
2026-02-02 00:55:24.414443 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-02 00:55:24.414458 | orchestrator | Monday 02 February 2026  00:53:17 +0000 (0:00:13.417)       0:01:42.075 *******
2026-02-02 00:55:24.414474 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:55:24.414489 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:55:24.414505 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:55:24.414521 | orchestrator |
2026-02-02 00:55:24.414538 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-02-02 00:55:24.414553 | orchestrator | Monday 02 February 2026  00:53:33 +0000 (0:00:15.508)       0:01:57.583 *******
2026-02-02 00:55:24.414570 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-02-02 00:55:24.414586 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-02-02 00:55:24.414601 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-02-02 00:55:24.414616 | orchestrator |
2026-02-02 00:55:24.414632 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-02 00:55:24.414649 | orchestrator | Monday 02 February 2026  00:53:47 +0000 (0:00:13.610)       0:02:11.193 *******
2026-02-02 00:55:24.414665 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:55:24.414681 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:55:24.414696 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:55:24.414712 | orchestrator |
2026-02-02 00:55:24.414729 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-02 00:55:24.414746 | orchestrator | Monday 02 February 2026  00:54:02 +0000 (0:00:15.601)       0:02:26.795 *******
2026-02-02 00:55:24.414764 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:55:24.414794 | orchestrator |
2026-02-02 00:55:24.414810 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-02 00:55:24.414826 | orchestrator | Monday 02 February 2026  00:54:02 +0000 (0:00:00.133)       0:02:26.928 *******
2026-02-02 00:55:24.414851 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:55:24.414867 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:55:24.414882 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:55:24.414898 | orchestrator |
2026-02-02 00:55:24.414914 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-02 00:55:24.414931 | orchestrator | Monday 02 February 2026  00:54:03 +0000 (0:00:00.834)       0:02:27.763 *******
2026-02-02 00:55:24.414947 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:55:24.414962 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:55:24.414979 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:55:24.414994 | orchestrator |
2026-02-02 00:55:24.415011 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-02 00:55:24.415028 | orchestrator | Monday 02 February 2026  00:54:04 +0000 (0:00:00.642)       0:02:28.405 *******
2026-02-02 00:55:24.415039 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:55:24.415049 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:55:24.415058 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:55:24.415068 | orchestrator |
2026-02-02 00:55:24.415078 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-02 00:55:24.415087 | orchestrator | Monday 02 February 2026  00:54:05 +0000 (0:00:01.132)       0:02:29.538 *******
2026-02-02 00:55:24.415097 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:55:24.415106 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:55:24.415116 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:55:24.415125 | orchestrator |
2026-02-02 00:55:24.415135 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-02 00:55:24.415144 | orchestrator | Monday 02 February 2026  00:54:06 +0000 (0:00:00.660)       0:02:30.198 *******
2026-02-02 00:55:24.415154 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:55:24.415163 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:55:24.415173 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:55:24.415182 | orchestrator |
2026-02-02 00:55:24.415192 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-02 00:55:24.415202 | orchestrator | Monday 02 February 2026  00:54:06 +0000 (0:00:00.741)       0:02:30.940 *******
2026-02-02 00:55:24.415211 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:55:24.415220 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:55:24.415230 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:55:24.415239 | orchestrator |
2026-02-02 00:55:24.415249 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-02-02 00:55:24.415258 | orchestrator | Monday 02 February 2026  00:54:07 +0000 (0:00:00.733)       0:02:31.673 *******
2026-02-02 00:55:24.415268 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-02-02 00:55:24.415278 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-02-02 00:55:24.415287 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-02-02 00:55:24.415297 | orchestrator |
2026-02-02 00:55:24.415306 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-02-02 00:55:24.415316 | orchestrator | Monday 02 February 2026  00:54:08 +0000 (0:00:01.057)       0:02:32.730 *******
2026-02-02 00:55:24.415325 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:55:24.415335 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:55:24.415344 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:55:24.415354 | orchestrator |
2026-02-02 00:55:24.415364 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-02 00:55:24.415541 | orchestrator | Monday 02 February 2026  00:54:08 +0000 (0:00:00.358)       0:02:33.089 *******
2026-02-02 00:55:24.415574 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415598 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415609 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415626 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415636 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415646 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415667 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415674 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415693 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415707 | orchestrator |
2026-02-02 00:55:24.415717 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-02 00:55:24.415724 | orchestrator | Monday 02 February 2026  00:54:11 +0000 (0:00:03.018)       0:02:36.108 *******
2026-02-02 00:55:24.415731 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415738 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415745 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415756 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415781 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.415835 | orchestrator |
2026-02-02 00:55:24.415842 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-02-02 00:55:24.415849 | orchestrator | Monday 02 February 2026  00:54:17 +0000 (0:00:05.538)       0:02:41.646 *******
2026-02-02 00:55:24.415856 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-02-02 00:55:24.415863 | orchestrator |
2026-02-02 00:55:24.415870 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-02-02 00:55:24.415877 | orchestrator | Monday 02 February 2026  00:54:18 +0000 (0:00:00.871)       0:02:42.517 *******
2026-02-02 00:55:24.415883 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:55:24.415890 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:55:24.415897 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:55:24.415903 | orchestrator |
2026-02-02 00:55:24.415910 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-02-02 00:55:24.415916 | orchestrator | Monday 02 February 2026  00:54:19 +0000 (0:00:00.740)       0:02:43.258 *******
2026-02-02 00:55:24.415923 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:55:24.415930 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:55:24.415936 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:55:24.415943 | orchestrator |
2026-02-02 00:55:24.415949 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-02-02 00:55:24.415956 | orchestrator | Monday 02 February 2026  00:54:20 +0000 (0:00:01.725)       0:02:44.983 *******
2026-02-02 00:55:24.415963 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:55:24.415969 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:55:24.415976 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:55:24.415982 | orchestrator |
2026-02-02 00:55:24.415989 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-02-02 00:55:24.415996 | orchestrator | Monday 02 February 2026  00:54:23 +0000 (0:00:02.369)       0:02:47.353 *******
2026-02-02 00:55:24.416003 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416010 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416017 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416028 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416078 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416150 | orchestrator |
2026-02-02 00:55:24.416157 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-02 00:55:24.416164 | orchestrator | Monday 02 February 2026  00:54:27 +0000 (0:00:04.659)       0:02:52.013 *******
2026-02-02 00:55:24.416171 | orchestrator | ok: [testbed-node-0] => {
2026-02-02 00:55:24.416178 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 00:55:24.416185 | orchestrator | }
2026-02-02 00:55:24.416191 | orchestrator | changed: [testbed-node-1] => {
2026-02-02 00:55:24.416198 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 00:55:24.416205 | orchestrator | }
2026-02-02 00:55:24.416211 | orchestrator | changed: [testbed-node-2] => {
2026-02-02 00:55:24.416218 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 00:55:24.416225 | orchestrator | }
2026-02-02 00:55:24.416231 | orchestrator |
2026-02-02 00:55:24.416238 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-02 00:55:24.416245 | orchestrator | Monday 02 February 2026  00:54:28 +0000 (0:00:00.380)       0:02:52.394 *******
2026-02-02 00:55:24.416257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416332 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 00:55:24.416339 | orchestrator |
2026-02-02 00:55:24.416346 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-02-02 00:55:24.416353 | orchestrator | Monday 02 February 2026  00:54:31 +0000 (0:00:03.231)       0:02:55.625 *******
2026-02-02 00:55:24.416360 | orchestrator | changed: [testbed-node-0] => (item=[1])
2026-02-02 00:55:24.416366 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-02-02 00:55:24.416386 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-02-02 00:55:24.416393 | orchestrator |
2026-02-02 00:55:24.416399 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-02 00:55:24.416406 | orchestrator | Monday 02 February 2026  00:54:32 +0000 (0:00:01.302)       0:02:56.928 *******
2026-02-02 00:55:24.416413 | orchestrator | changed: [testbed-node-0] => {
2026-02-02 00:55:24.416420 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 00:55:24.416426 | orchestrator | }
2026-02-02 00:55:24.416433 | orchestrator | changed: [testbed-node-1] => {
2026-02-02 00:55:24.416444 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 00:55:24.416450 | orchestrator | }
2026-02-02 00:55:24.416457 | orchestrator | changed: [testbed-node-2] => {
2026-02-02 00:55:24.416464 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 00:55:24.416470 | orchestrator | }
2026-02-02 00:55:24.416477 | orchestrator |
2026-02-02
00:55:24.416484 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-02 00:55:24.416491 | orchestrator | Monday 02 February 2026 00:54:33 +0000 (0:00:00.560) 0:02:57.488 ******* 2026-02-02 00:55:24.416497 | orchestrator | 2026-02-02 00:55:24.416504 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-02 00:55:24.416514 | orchestrator | Monday 02 February 2026 00:54:33 +0000 (0:00:00.062) 0:02:57.551 ******* 2026-02-02 00:55:24.416521 | orchestrator | 2026-02-02 00:55:24.416527 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-02 00:55:24.416534 | orchestrator | Monday 02 February 2026 00:54:33 +0000 (0:00:00.070) 0:02:57.622 ******* 2026-02-02 00:55:24.416541 | orchestrator | 2026-02-02 00:55:24.416548 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-02 00:55:24.416554 | orchestrator | Monday 02 February 2026 00:54:33 +0000 (0:00:00.066) 0:02:57.688 ******* 2026-02-02 00:55:24.416561 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:55:24.416568 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:55:24.416574 | orchestrator | 2026-02-02 00:55:24.416581 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-02 00:55:24.416588 | orchestrator | Monday 02 February 2026 00:54:46 +0000 (0:00:12.833) 0:03:10.522 ******* 2026-02-02 00:55:24.416594 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:55:24.416601 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:55:24.416608 | orchestrator | 2026-02-02 00:55:24.416615 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-02-02 00:55:24.416621 | orchestrator | Monday 02 February 2026 00:54:59 +0000 (0:00:13.117) 0:03:23.640 ******* 2026-02-02 00:55:24.416628 | orchestrator | changed: 
[testbed-node-2] => (item=1) 2026-02-02 00:55:24.416635 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-02-02 00:55:24.416641 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-02-02 00:55:24.416648 | orchestrator | 2026-02-02 00:55:24.416655 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-02 00:55:24.416661 | orchestrator | Monday 02 February 2026 00:55:15 +0000 (0:00:16.023) 0:03:39.663 ******* 2026-02-02 00:55:24.416668 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:55:24.416675 | orchestrator | 2026-02-02 00:55:24.416681 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-02 00:55:24.416688 | orchestrator | Monday 02 February 2026 00:55:15 +0000 (0:00:00.133) 0:03:39.796 ******* 2026-02-02 00:55:24.416695 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:55:24.416701 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:55:24.416708 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:55:24.416716 | orchestrator | 2026-02-02 00:55:24.416728 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-02 00:55:24.416739 | orchestrator | Monday 02 February 2026 00:55:16 +0000 (0:00:00.950) 0:03:40.746 ******* 2026-02-02 00:55:24.416756 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.416770 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.416781 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:55:24.416791 | orchestrator | 2026-02-02 00:55:24.416802 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-02 00:55:24.416812 | orchestrator | Monday 02 February 2026 00:55:17 +0000 (0:00:00.806) 0:03:41.553 ******* 2026-02-02 00:55:24.416824 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:55:24.416837 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:55:24.416849 | orchestrator | ok: 
[testbed-node-2] 2026-02-02 00:55:24.416860 | orchestrator | 2026-02-02 00:55:24.416872 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-02 00:55:24.416898 | orchestrator | Monday 02 February 2026 00:55:18 +0000 (0:00:01.166) 0:03:42.720 ******* 2026-02-02 00:55:24.416906 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:55:24.416912 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:55:24.416919 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:55:24.416926 | orchestrator | 2026-02-02 00:55:24.416932 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-02 00:55:24.416939 | orchestrator | Monday 02 February 2026 00:55:19 +0000 (0:00:00.638) 0:03:43.358 ******* 2026-02-02 00:55:24.416945 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:55:24.416952 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:55:24.416959 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:55:24.416965 | orchestrator | 2026-02-02 00:55:24.416972 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-02 00:55:24.416979 | orchestrator | Monday 02 February 2026 00:55:20 +0000 (0:00:00.858) 0:03:44.217 ******* 2026-02-02 00:55:24.416986 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:55:24.416992 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:55:24.416999 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:55:24.417005 | orchestrator | 2026-02-02 00:55:24.417012 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-02-02 00:55:24.417019 | orchestrator | Monday 02 February 2026 00:55:21 +0000 (0:00:00.959) 0:03:45.176 ******* 2026-02-02 00:55:24.417025 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-02-02 00:55:24.417032 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-02-02 00:55:24.417039 | orchestrator | ok: [testbed-node-2] => (item=1) 
2026-02-02 00:55:24.417045 | orchestrator | 2026-02-02 00:55:24.417052 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:55:24.417059 | orchestrator | testbed-node-0 : ok=65  changed=29  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-02 00:55:24.417067 | orchestrator | testbed-node-1 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-02-02 00:55:24.417073 | orchestrator | testbed-node-2 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-02-02 00:55:24.417080 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:55:24.417087 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:55:24.417097 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 00:55:24.417104 | orchestrator | 2026-02-02 00:55:24.417111 | orchestrator | 2026-02-02 00:55:24.417118 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 00:55:24.417124 | orchestrator | Monday 02 February 2026 00:55:22 +0000 (0:00:01.308) 0:03:46.484 ******* 2026-02-02 00:55:24.417131 | orchestrator | =============================================================================== 2026-02-02 00:55:24.417137 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 29.63s 2026-02-02 00:55:24.417144 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 28.63s 2026-02-02 00:55:24.417151 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 26.25s 2026-02-02 00:55:24.417157 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.83s 2026-02-02 00:55:24.417164 | orchestrator | ovn-db : Restart ovn-northd 
container ---------------------------------- 15.60s 2026-02-02 00:55:24.417170 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 8.79s 2026-02-02 00:55:24.417177 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.54s 2026-02-02 00:55:24.417188 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.66s 2026-02-02 00:55:24.417195 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.66s 2026-02-02 00:55:24.417201 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 3.80s 2026-02-02 00:55:24.417208 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.67s 2026-02-02 00:55:24.417214 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.23s 2026-02-02 00:55:24.417221 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.02s 2026-02-02 00:55:24.417228 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.99s 2026-02-02 00:55:24.417234 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.55s 2026-02-02 00:55:24.417241 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.44s 2026-02-02 00:55:24.417247 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.37s 2026-02-02 00:55:24.417254 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.16s 2026-02-02 00:55:24.417261 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.11s 2026-02-02 00:55:24.417267 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.97s 2026-02-02 00:55:24.417274 | orchestrator | 2026-02-02 00:55:24 | INFO  | Wait 1 
second(s) until the next check 2026-02-02 00:55:27.444224 | orchestrator | 2026-02-02 00:55:27 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:55:27.445552 | orchestrator | 2026-02-02 00:55:27 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:55:27.445586 | orchestrator | 2026-02-02 00:55:27 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:55:30.497204 | orchestrator | 2026-02-02 00:55:30 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:55:30.498349 | orchestrator | 2026-02-02 00:55:30 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:55:30.498454 | orchestrator | 2026-02-02 00:55:30 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:55:33.541892 | orchestrator | 2026-02-02 00:55:33 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:55:33.543050 | orchestrator | 2026-02-02 00:55:33 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:55:33.543188 | orchestrator | 2026-02-02 00:55:33 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:55:36.589951 | orchestrator | 2026-02-02 00:55:36 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:55:36.590475 | orchestrator | 2026-02-02 00:55:36 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:55:36.590518 | orchestrator | 2026-02-02 00:55:36 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:55:39.658824 | orchestrator | 2026-02-02 00:55:39 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:55:39.663009 | orchestrator | 2026-02-02 00:55:39 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:55:39.663095 | orchestrator | 2026-02-02 00:55:39 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:55:42.704745 | orchestrator | 
2026-02-02 00:55:42 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:55:42.706879 | orchestrator | 2026-02-02 00:55:42 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:55:42.706968 | orchestrator | 2026-02-02 00:55:42 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:55:45.759587 | orchestrator | 2026-02-02 00:55:45 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:55:45.761754 | orchestrator | 2026-02-02 00:55:45 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:55:45.761834 | orchestrator | 2026-02-02 00:55:45 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:55:48.814283 | orchestrator | 2026-02-02 00:55:48 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:55:48.817800 | orchestrator | 2026-02-02 00:55:48 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:55:48.817884 | orchestrator | 2026-02-02 00:55:48 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:55:51.861051 | orchestrator | 2026-02-02 00:55:51 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:55:51.862119 | orchestrator | 2026-02-02 00:55:51 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:55:51.862206 | orchestrator | 2026-02-02 00:55:51 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:55:54.908040 | orchestrator | 2026-02-02 00:55:54 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:55:54.911875 | orchestrator | 2026-02-02 00:55:54 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:55:54.911937 | orchestrator | 2026-02-02 00:55:54 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:55:57.953284 | orchestrator | 2026-02-02 00:55:57 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in 
state STARTED 2026-02-02 00:55:57.954888 | orchestrator | 2026-02-02 00:55:57 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:55:57.954963 | orchestrator | 2026-02-02 00:55:57 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:00.993699 | orchestrator | 2026-02-02 00:56:00 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:00.994994 | orchestrator | 2026-02-02 00:56:00 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:00.995151 | orchestrator | 2026-02-02 00:56:00 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:04.030904 | orchestrator | 2026-02-02 00:56:04 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:04.032017 | orchestrator | 2026-02-02 00:56:04 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:04.032056 | orchestrator | 2026-02-02 00:56:04 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:07.089928 | orchestrator | 2026-02-02 00:56:07 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:07.092686 | orchestrator | 2026-02-02 00:56:07 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:07.092750 | orchestrator | 2026-02-02 00:56:07 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:10.133962 | orchestrator | 2026-02-02 00:56:10 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:10.135554 | orchestrator | 2026-02-02 00:56:10 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:10.135589 | orchestrator | 2026-02-02 00:56:10 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:13.179651 | orchestrator | 2026-02-02 00:56:13 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:13.182720 | orchestrator | 2026-02-02 00:56:13 | 
INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:13.182784 | orchestrator | 2026-02-02 00:56:13 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:16.232230 | orchestrator | 2026-02-02 00:56:16 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:16.233518 | orchestrator | 2026-02-02 00:56:16 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:16.233564 | orchestrator | 2026-02-02 00:56:16 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:19.285743 | orchestrator | 2026-02-02 00:56:19 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:19.285844 | orchestrator | 2026-02-02 00:56:19 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:19.285861 | orchestrator | 2026-02-02 00:56:19 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:22.337781 | orchestrator | 2026-02-02 00:56:22 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:22.338691 | orchestrator | 2026-02-02 00:56:22 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:22.338788 | orchestrator | 2026-02-02 00:56:22 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:25.387608 | orchestrator | 2026-02-02 00:56:25 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:25.388584 | orchestrator | 2026-02-02 00:56:25 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:25.388621 | orchestrator | 2026-02-02 00:56:25 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:28.436357 | orchestrator | 2026-02-02 00:56:28 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:28.445130 | orchestrator | 2026-02-02 00:56:28 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 
2026-02-02 00:56:28.445219 | orchestrator | 2026-02-02 00:56:28 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:31.482261 | orchestrator | 2026-02-02 00:56:31 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:31.485721 | orchestrator | 2026-02-02 00:56:31 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:31.485774 | orchestrator | 2026-02-02 00:56:31 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:34.530437 | orchestrator | 2026-02-02 00:56:34 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:34.531832 | orchestrator | 2026-02-02 00:56:34 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:34.531892 | orchestrator | 2026-02-02 00:56:34 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:37.580096 | orchestrator | 2026-02-02 00:56:37 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:37.580190 | orchestrator | 2026-02-02 00:56:37 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:37.580203 | orchestrator | 2026-02-02 00:56:37 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:40.620730 | orchestrator | 2026-02-02 00:56:40 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:40.622142 | orchestrator | 2026-02-02 00:56:40 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:40.622188 | orchestrator | 2026-02-02 00:56:40 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:43.666376 | orchestrator | 2026-02-02 00:56:43 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:43.666521 | orchestrator | 2026-02-02 00:56:43 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:43.666597 | orchestrator | 2026-02-02 00:56:43 | INFO  | Wait 
1 second(s) until the next check 2026-02-02 00:56:46.705076 | orchestrator | 2026-02-02 00:56:46 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:46.705417 | orchestrator | 2026-02-02 00:56:46 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:46.705443 | orchestrator | 2026-02-02 00:56:46 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:49.769776 | orchestrator | 2026-02-02 00:56:49 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:49.773385 | orchestrator | 2026-02-02 00:56:49 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:49.773606 | orchestrator | 2026-02-02 00:56:49 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:52.820986 | orchestrator | 2026-02-02 00:56:52 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:52.824870 | orchestrator | 2026-02-02 00:56:52 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:52.824929 | orchestrator | 2026-02-02 00:56:52 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:55.861093 | orchestrator | 2026-02-02 00:56:55 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:55.861734 | orchestrator | 2026-02-02 00:56:55 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:55.862496 | orchestrator | 2026-02-02 00:56:55 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:56:58.902758 | orchestrator | 2026-02-02 00:56:58 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:56:58.904100 | orchestrator | 2026-02-02 00:56:58 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:56:58.904179 | orchestrator | 2026-02-02 00:56:58 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:57:01.947038 | orchestrator | 
2026-02-02 00:57:01 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:57:01.948156 | orchestrator | 2026-02-02 00:57:01 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:57:01.948409 | orchestrator | 2026-02-02 00:57:01 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:57:04.985772 | orchestrator | 2026-02-02 00:57:04 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:57:04.986082 | orchestrator | 2026-02-02 00:57:04 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:57:04.986123 | orchestrator | 2026-02-02 00:57:04 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:57:08.033024 | orchestrator | 2026-02-02 00:57:08 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:57:08.034930 | orchestrator | 2026-02-02 00:57:08 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:57:08.034969 | orchestrator | 2026-02-02 00:57:08 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:57:11.103397 | orchestrator | 2026-02-02 00:57:11 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:57:11.103629 | orchestrator | 2026-02-02 00:57:11 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:57:11.103656 | orchestrator | 2026-02-02 00:57:11 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:57:14.140281 | orchestrator | 2026-02-02 00:57:14 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:57:14.140666 | orchestrator | 2026-02-02 00:57:14 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:57:14.140699 | orchestrator | 2026-02-02 00:57:14 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:57:17.189350 | orchestrator | 2026-02-02 00:57:17 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in 
state STARTED 2026-02-02 00:57:17.190201 | orchestrator | 2026-02-02 00:57:17 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:57:17.190352 | orchestrator | 2026-02-02 00:57:17 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:57:20.231270 | orchestrator | 2026-02-02 00:57:20 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:57:20.233906 | orchestrator | 2026-02-02 00:57:20 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:57:20.233987 | orchestrator | 2026-02-02 00:57:20 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:57:23.290182 | orchestrator | 2026-02-02 00:57:23 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:57:23.290339 | orchestrator | 2026-02-02 00:57:23 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state STARTED 2026-02-02 00:57:23.290357 | orchestrator | 2026-02-02 00:57:23 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:57:26.337941 | orchestrator | 2026-02-02 00:57:26 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED 2026-02-02 00:57:26.339818 | orchestrator | 2026-02-02 00:57:26 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED 2026-02-02 00:57:26.341856 | orchestrator | 2026-02-02 00:57:26 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:57:26.351752 | orchestrator | 2026-02-02 00:57:26 | INFO  | Task 4369018e-7af5-4c9c-ba9a-c5b25664a516 is in state SUCCESS 2026-02-02 00:57:26.354334 | orchestrator | 2026-02-02 00:57:26.354415 | orchestrator | 2026-02-02 00:57:26.354432 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 00:57:26.354446 | orchestrator | 2026-02-02 00:57:26.354457 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 00:57:26.354469 | orchestrator | 
Monday 02 February 2026 00:50:16 +0000 (0:00:00.340) 0:00:00.340 *******
2026-02-02 00:57:26.354481 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:57:26.354493 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:57:26.354504 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:57:26.354515 | orchestrator |
2026-02-02 00:57:26.354526 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 00:57:26.354538 | orchestrator | Monday 02 February 2026 00:50:16 +0000 (0:00:00.335) 0:00:00.675 *******
2026-02-02 00:57:26.354550 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-02 00:57:26.354561 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-02 00:57:26.354572 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-02 00:57:26.354583 | orchestrator |
2026-02-02 00:57:26.354828 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-02 00:57:26.354861 | orchestrator |
2026-02-02 00:57:26.355283 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-02 00:57:26.355305 | orchestrator | Monday 02 February 2026 00:50:17 +0000 (0:00:00.688) 0:00:01.363 *******
2026-02-02 00:57:26.355317 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:57:26.355328 | orchestrator |
2026-02-02 00:57:26.355340 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-02 00:57:26.355350 | orchestrator | Monday 02 February 2026 00:50:17 +0000 (0:00:00.557) 0:00:01.920 *******
2026-02-02 00:57:26.355390 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:57:26.355401 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:57:26.355412 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:57:26.355423 | orchestrator |
2026-02-02 00:57:26.355571 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-02 00:57:26.355600 | orchestrator | Monday 02 February 2026 00:50:18 +0000 (0:00:00.649) 0:00:02.569 *******
2026-02-02 00:57:26.356009 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:57:26.356031 | orchestrator |
2026-02-02 00:57:26.356050 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-02 00:57:26.356071 | orchestrator | Monday 02 February 2026 00:50:19 +0000 (0:00:01.025) 0:00:03.595 *******
2026-02-02 00:57:26.356090 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:57:26.356107 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:57:26.356126 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:57:26.356146 | orchestrator |
2026-02-02 00:57:26.356164 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-02 00:57:26.356184 | orchestrator | Monday 02 February 2026 00:50:20 +0000 (0:00:00.862) 0:00:04.458 *******
2026-02-02 00:57:26.356239 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-02 00:57:26.356260 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-02 00:57:26.356278 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-02 00:57:26.356295 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-02 00:57:26.356312 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-02 00:57:26.356329 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-02 00:57:26.356347 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-02 00:57:26.356366 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-02 00:57:26.356385 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-02 00:57:26.356403 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-02 00:57:26.356422 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-02 00:57:26.356439 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-02 00:57:26.356458 | orchestrator |
2026-02-02 00:57:26.356477 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-02 00:57:26.356496 | orchestrator | Monday 02 February 2026 00:50:24 +0000 (0:00:04.547) 0:00:09.005 *******
2026-02-02 00:57:26.356514 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-02 00:57:26.356527 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-02 00:57:26.356538 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-02 00:57:26.356549 | orchestrator |
2026-02-02 00:57:26.356562 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-02 00:57:26.356575 | orchestrator | Monday 02 February 2026 00:50:26 +0000 (0:00:01.222) 0:00:10.228 *******
2026-02-02 00:57:26.356588 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-02 00:57:26.357130 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-02 00:57:26.357153 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-02 00:57:26.357164 | orchestrator |
2026-02-02 00:57:26.357175 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-02 00:57:26.357186 | orchestrator | Monday 02 February 2026 00:50:29 +0000 (0:00:02.865) 0:00:13.093 *******
2026-02-02 00:57:26.357197 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-02 00:57:26.357256 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.357317 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-02 00:57:26.357330 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.357341 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-02-02 00:57:26.357801 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.357829 | orchestrator |
2026-02-02 00:57:26.357849 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-02-02 00:57:26.357867 | orchestrator | Monday 02 February 2026 00:50:30 +0000 (0:00:01.471) 0:00:14.565 *******
2026-02-02 00:57:26.357904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-02 00:57:26.357931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-02 00:57:26.357949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-02 00:57:26.357961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 00:57:26.357973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 00:57:26.358620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 00:57:26.358671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 00:57:26.358690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 00:57:26.358702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 00:57:26.358714 | orchestrator |
2026-02-02 00:57:26.358725 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-02-02 00:57:26.358736 | orchestrator | Monday 02 February 2026 00:50:34 +0000 (0:00:03.706) 0:00:18.272 *******
2026-02-02 00:57:26.358747 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:57:26.358758 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:57:26.358768 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:57:26.358779 | orchestrator |
2026-02-02 00:57:26.358790 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-02-02 00:57:26.358801 | orchestrator | Monday 02 February 2026 00:50:35 +0000 (0:00:01.594) 0:00:19.866 *******
2026-02-02 00:57:26.358812 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-02-02 00:57:26.358823 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-02-02 00:57:26.358833 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-02-02 00:57:26.358843 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-02-02 00:57:26.358852 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-02-02 00:57:26.358862 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-02-02 00:57:26.358871 | orchestrator |
2026-02-02 00:57:26.358881 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-02-02 00:57:26.358890 | orchestrator | Monday 02 February 2026 00:50:38 +0000 (0:00:02.967) 0:00:22.834 *******
2026-02-02 00:57:26.359837 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:57:26.359860 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:57:26.359877 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:57:26.359891 | orchestrator |
2026-02-02 00:57:26.359906 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-02-02 00:57:26.359921 | orchestrator | Monday 02 February 2026 00:50:41 +0000 (0:00:02.515) 0:00:25.350 *******
2026-02-02 00:57:26.359936 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:57:26.359969 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:57:26.359986 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:57:26.360001 | orchestrator |
2026-02-02 00:57:26.360018 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-02-02 00:57:26.360035 | orchestrator | Monday 02 February 2026 00:50:43 +0000 (0:00:02.461) 0:00:27.811 *******
2026-02-02 00:57:26.360053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-02 00:57:26.360643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-02 00:57:26.360798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 00:57:26.360836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 00:57:26.360856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 00:57:26.360873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 00:57:26.360906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__76e5f2dff2da73d062658184a01db2741acf0ac3', '__omit_place_holder__76e5f2dff2da73d062658184a01db2741acf0ac3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-02 00:57:26.360924 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.360941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__76e5f2dff2da73d062658184a01db2741acf0ac3', '__omit_place_holder__76e5f2dff2da73d062658184a01db2741acf0ac3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-02 00:57:26.360957 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.361702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-02 00:57:26.361907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 00:57:26.361945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 00:57:26.361960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__76e5f2dff2da73d062658184a01db2741acf0ac3', '__omit_place_holder__76e5f2dff2da73d062658184a01db2741acf0ac3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-02 00:57:26.361984 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.361994 | orchestrator |
2026-02-02 00:57:26.362010 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-02-02 00:57:26.362135 | orchestrator | Monday 02 February 2026 00:50:44 +0000 (0:00:00.992) 0:00:28.803 *******
2026-02-02 00:57:26.362155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-02 00:57:26.362173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-02 00:57:26.362243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-02 00:57:26.362270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 00:57:26.362286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 00:57:26.362302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__76e5f2dff2da73d062658184a01db2741acf0ac3', '__omit_place_holder__76e5f2dff2da73d062658184a01db2741acf0ac3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-02 00:57:26.362337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 00:57:26.362355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 00:57:26.362373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__76e5f2dff2da73d062658184a01db2741acf0ac3', '__omit_place_holder__76e5f2dff2da73d062658184a01db2741acf0ac3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-02 00:57:26.362404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 00:57:26.362426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 00:57:26.362443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__76e5f2dff2da73d062658184a01db2741acf0ac3', '__omit_place_holder__76e5f2dff2da73d062658184a01db2741acf0ac3'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-02 00:57:26.362460 | orchestrator |
2026-02-02 00:57:26.362488 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-02-02 00:57:26.362505 | orchestrator | Monday 02 February 2026 00:50:47 +0000 (0:00:03.106) 0:00:31.909 *******
2026-02-02 00:57:26.362523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-02 00:57:26.362542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-02 00:57:26.362560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-02 00:57:26.362589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 00:57:26.362701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 00:57:26.362720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 00:57:26.362750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 00:57:26.362803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 00:57:26.362818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 00:57:26.362884 | orchestrator |
2026-02-02 00:57:26.362895 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-02-02 00:57:26.362905 | orchestrator | Monday 02 February 2026 00:50:53 +0000 (0:00:05.451) 0:00:37.361 *******
2026-02-02 00:57:26.362915 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-02 00:57:26.362925 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-02 00:57:26.362934 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-02 00:57:26.362944 | orchestrator |
2026-02-02 00:57:26.362954 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-02-02 00:57:26.362963 | orchestrator | Monday 02 February 2026 00:50:56 +0000 (0:00:03.147) 0:00:40.508 *******
2026-02-02 00:57:26.362974 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-02 00:57:26.362983 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-02 00:57:26.362993 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-02 00:57:26.363003 | orchestrator |
2026-02-02 00:57:26.363022 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-02-02 00:57:26.363032 | orchestrator | Monday 02 February 2026 00:51:02 +0000 (0:00:06.048) 0:00:46.557 *******
2026-02-02 00:57:26.363042 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.363052 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.363061 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.363071 | orchestrator |
2026-02-02 00:57:26.363081 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-02-02 00:57:26.363091 | orchestrator | Monday 02 February 2026 00:51:03 +0000 (0:00:00.542) 0:00:47.100 *******
2026-02-02 00:57:26.363101 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-02 00:57:26.363112 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-02 00:57:26.363122 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-02 00:57:26.363139 | orchestrator |
2026-02-02 00:57:26.363154 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-02-02 00:57:26.363164 | orchestrator | Monday 02 February 2026 00:51:04 +0000 (0:00:01.898) 0:00:48.998 *******
2026-02-02 00:57:26.363174 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-02 00:57:26.363184 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-02 00:57:26.363193 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-02 00:57:26.363203 | orchestrator |
2026-02-02 00:57:26.363237 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-02 00:57:26.363255 | orchestrator | Monday 02 February 2026 00:51:07 +0000 (0:00:02.723) 0:00:51.722 *******
2026-02-02 00:57:26.363271 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:57:26.363407 | orchestrator |
2026-02-02 00:57:26.363432 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-02-02 00:57:26.363450 | orchestrator | Monday 02 February 2026 00:51:08 +0000 (0:00:01.118) 0:00:52.840 *******
2026-02-02 00:57:26.363469 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-02-02 00:57:26.363485 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-02-02 00:57:26.363501 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-02-02 00:57:26.363518 | orchestrator |
2026-02-02 00:57:26.363534 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-02-02 00:57:26.363551 | orchestrator | Monday 02 February 2026 00:51:11 +0000 (0:00:02.321) 0:00:55.162 *******
2026-02-02 00:57:26.363567 |
orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-02 00:57:26.363585 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-02 00:57:26.363602 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-02 00:57:26.363619 | orchestrator | 2026-02-02 00:57:26.363637 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-02-02 00:57:26.363654 | orchestrator | Monday 02 February 2026 00:51:14 +0000 (0:00:03.011) 0:00:58.173 ******* 2026-02-02 00:57:26.363672 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.363684 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.363699 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.363801 | orchestrator | 2026-02-02 00:57:26.363820 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-02-02 00:57:26.363835 | orchestrator | Monday 02 February 2026 00:51:14 +0000 (0:00:00.289) 0:00:58.463 ******* 2026-02-02 00:57:26.363850 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.363868 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.363884 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.363901 | orchestrator | 2026-02-02 00:57:26.363917 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-02 00:57:26.363932 | orchestrator | Monday 02 February 2026 00:51:14 +0000 (0:00:00.315) 0:00:58.778 ******* 2026-02-02 00:57:26.363944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-02 00:57:26.363977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-02 00:57:26.363988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-02 00:57:26.364008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 00:57:26.364026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 00:57:26.364042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 00:57:26.364058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 00:57:26.364074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 00:57:26.364177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 00:57:26.364197 | orchestrator | 2026-02-02 00:57:26.364286 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-02 00:57:26.364308 | orchestrator | Monday 02 February 2026 00:51:17 +0000 (0:00:03.177) 0:01:01.956 ******* 2026-02-02 00:57:26.364355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-02 00:57:26.364374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 00:57:26.364391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 00:57:26.364408 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.364426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-02 00:57:26.364591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 00:57:26.364623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 00:57:26.364634 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.364655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-02 00:57:26.364671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 00:57:26.364681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 00:57:26.364692 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.364701 | orchestrator | 2026-02-02 00:57:26.364711 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-02 00:57:26.364721 | orchestrator | Monday 02 February 2026 00:51:19 +0000 (0:00:01.358) 0:01:03.314 ******* 2026-02-02 00:57:26.364732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-02 00:57:26.364742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 00:57:26.364759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 00:57:26.364769 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.364788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-02 00:57:26.364799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 00:57:26.364807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-02 00:57:26.364816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 00:57:26.364824 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.364859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 00:57:26.364874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 00:57:26.364917 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.364926 | orchestrator | 2026-02-02 00:57:26.364935 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-02 00:57:26.364943 | orchestrator | Monday 02 February 2026 00:51:20 +0000 (0:00:01.344) 0:01:04.658 ******* 2026-02-02 00:57:26.364951 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 
2026-02-02 00:57:26.364960 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-02 00:57:26.364968 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-02 00:57:26.364975 | orchestrator | 2026-02-02 00:57:26.364983 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-02 00:57:26.364991 | orchestrator | Monday 02 February 2026 00:51:22 +0000 (0:00:01.562) 0:01:06.221 ******* 2026-02-02 00:57:26.364999 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-02 00:57:26.365013 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-02 00:57:26.365021 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-02 00:57:26.365029 | orchestrator | 2026-02-02 00:57:26.365037 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-02 00:57:26.365045 | orchestrator | Monday 02 February 2026 00:51:23 +0000 (0:00:01.800) 0:01:08.021 ******* 2026-02-02 00:57:26.365053 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 00:57:26.365061 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 00:57:26.365069 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 00:57:26.365077 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 00:57:26.365085 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.365097 | orchestrator | skipping: 
[testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 00:57:26.365105 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.365113 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 00:57:26.365120 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.365128 | orchestrator | 2026-02-02 00:57:26.365136 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-02 00:57:26.365153 | orchestrator | Monday 02 February 2026 00:51:24 +0000 (0:00:01.001) 0:01:09.023 ******* 2026-02-02 00:57:26.365162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-02 00:57:26.365176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-02 
00:57:26.365185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-02 00:57:26.365193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 00:57:26.365209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2026-02-02 00:57:26.365322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 00:57:26.365362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 00:57:26.365371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 00:57:26.365392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 00:57:26.365406 | orchestrator | 2026-02-02 00:57:26.365427 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-02 00:57:26.365443 | orchestrator | Monday 02 February 2026 00:51:27 +0000 (0:00:02.321) 0:01:11.345 ******* 2026-02-02 00:57:26.365453 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 00:57:26.365464 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:57:26.365476 | orchestrator | } 2026-02-02 00:57:26.365487 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 00:57:26.365497 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:57:26.365580 | orchestrator | } 2026-02-02 00:57:26.365588 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 00:57:26.365596 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:57:26.365602 | orchestrator | } 2026-02-02 00:57:26.365609 | orchestrator | 2026-02-02 00:57:26.365616 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 00:57:26.365622 | orchestrator | Monday 02 February 2026 00:51:27 +0000 (0:00:00.339) 0:01:11.684 ******* 2026-02-02 00:57:26.365629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-02 00:57:26.365647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-02 00:57:26.365659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 00:57:26.365667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 00:57:26.365682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 00:57:26.365689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 00:57:26.365696 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.365703 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.365710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-02 00:57:26.365717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 00:57:26.365730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 00:57:26.365737 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.365744 | orchestrator | 2026-02-02 00:57:26.365751 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-02 00:57:26.365758 | orchestrator | Monday 02 February 2026 00:51:29 +0000 (0:00:01.458) 0:01:13.142 ******* 2026-02-02 00:57:26.365764 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.365771 | orchestrator | 2026-02-02 00:57:26.365778 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-02 00:57:26.365789 | orchestrator | Monday 02 February 2026 00:51:29 +0000 (0:00:00.704) 
0:01:13.846 ******* 2026-02-02 00:57:26.365802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.365812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.365820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.365827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.365839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.365860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.365872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.365879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.365886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 
'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.365894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.365907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': 
'30'}}})  2026-02-02 00:57:26.365914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.365929 | orchestrator | 2026-02-02 00:57:26.365935 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-02 00:57:26.365945 | orchestrator | Monday 02 February 2026 00:51:34 +0000 (0:00:04.835) 0:01:18.682 ******* 2026-02-02 00:57:26.365953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.365960 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.365967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.365974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.365986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.365998 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.366009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.366077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.366088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.366096 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.366107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.366119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 
'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.366148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.366176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.366190 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.366202 | orchestrator | 2026-02-02 00:57:26.366229 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-02 00:57:26.366238 | orchestrator | Monday 02 February 2026 00:51:35 
+0000 (0:00:00.676) 0:01:19.359 ******* 2026-02-02 00:57:26.366245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.366253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.366281 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.366288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.366295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.366302 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.366309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.366315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.366322 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.366329 | orchestrator | 2026-02-02 00:57:26.366336 | orchestrator | TASK [proxysql-config : 
Copying over aodh ProxySQL users config] *************** 2026-02-02 00:57:26.366343 | orchestrator | Monday 02 February 2026 00:51:36 +0000 (0:00:01.027) 0:01:20.386 ******* 2026-02-02 00:57:26.366350 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.366356 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.366363 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.366370 | orchestrator | 2026-02-02 00:57:26.366376 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-02 00:57:26.366383 | orchestrator | Monday 02 February 2026 00:51:37 +0000 (0:00:01.631) 0:01:22.017 ******* 2026-02-02 00:57:26.366396 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.366403 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.366409 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.366416 | orchestrator | 2026-02-02 00:57:26.366423 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-02 00:57:26.366429 | orchestrator | Monday 02 February 2026 00:51:39 +0000 (0:00:02.006) 0:01:24.024 ******* 2026-02-02 00:57:26.366436 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.366443 | orchestrator | 2026-02-02 00:57:26.366449 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-02 00:57:26.366456 | orchestrator | Monday 02 February 2026 00:51:41 +0000 (0:00:01.065) 0:01:25.089 ******* 2026-02-02 00:57:26.366470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.366484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.366491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 
00:57:26.366499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:57:26.366512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:57:26.366525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.366536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.366543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.366550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.366557 | orchestrator |
2026-02-02 00:57:26.366568 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-02-02 00:57:26.366580 | orchestrator | Monday 02 February 2026 00:51:46 +0000 (0:00:05.559) 0:01:30.648 *******
2026-02-02 00:57:26.366598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:57:26.366616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.366627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.366638 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.366654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:57:26.366667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.366686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.366697 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.366716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:57:26.366728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.366743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.366750 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.366757 | orchestrator |
2026-02-02 00:57:26.366764 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-02-02 00:57:26.366771 | orchestrator | Monday 02 February 2026 00:51:48 +0000 (0:00:01.597) 0:01:32.246 *******
2026-02-02 00:57:26.366779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-02 00:57:26.366786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-02 00:57:26.366799 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.366806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-02 00:57:26.366814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-02 00:57:26.366821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-02 00:57:26.366829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-02 00:57:26.366836 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.366843 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.366849 | orchestrator |
2026-02-02 00:57:26.366856 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-02-02 00:57:26.366863 | orchestrator | Monday 02 February 2026 00:51:50 +0000 (0:00:02.092) 0:01:34.339 *******
2026-02-02 00:57:26.366870 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:57:26.366876 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:57:26.366883 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:57:26.366890 | orchestrator |
2026-02-02 00:57:26.366896 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-02-02 00:57:26.366903 | orchestrator | Monday 02 February 2026 00:51:51 +0000 (0:00:01.237) 0:01:35.576 *******
2026-02-02 00:57:26.366910 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:57:26.366916 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:57:26.366923 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:57:26.366930 | orchestrator |
2026-02-02 00:57:26.366937 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-02-02 00:57:26.366943 | orchestrator | Monday 02 February 2026 00:51:53 +0000 (0:00:02.083) 0:01:37.659 *******
2026-02-02 00:57:26.366950 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.366957 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.366964 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.366970 | orchestrator |
2026-02-02 00:57:26.366981 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-02-02 00:57:26.366988 | orchestrator | Monday 02 February 2026 00:51:54 +0000 (0:00:00.619) 0:01:38.279 *******
2026-02-02 00:57:26.366995 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:57:26.367002 | orchestrator |
2026-02-02 00:57:26.367008 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-02-02 00:57:26.367015 | orchestrator | Monday 02 February 2026 00:51:56 +0000 (0:00:01.908) 0:01:40.188 *******
2026-02-02 00:57:26.367028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-02-02 00:57:26.367041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-02-02 00:57:26.367049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-02-02 00:57:26.367056 | orchestrator |
2026-02-02 00:57:26.367076 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-02-02 00:57:26.367083 | orchestrator | Monday 02 February 2026 00:52:01 +0000 (0:00:04.931) 0:01:45.119 *******
2026-02-02 00:57:26.367090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-02-02 00:57:26.367097 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.367108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-02-02 00:57:26.367116 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.367126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-02-02 00:57:26.367138 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.367168 | orchestrator |
2026-02-02 00:57:26.367175 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-02-02 00:57:26.367181 | orchestrator | Monday 02 February 2026 00:52:03 +0000 (0:00:02.301) 0:01:47.421 *******
2026-02-02 00:57:26.367189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-02-02 00:57:26.367198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-02-02 00:57:26.367206 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.367232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-02-02 00:57:26.367241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-02-02 00:57:26.367248 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.367255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-02-02 00:57:26.367293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-02-02 00:57:26.367302 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.367309 | orchestrator |
2026-02-02 00:57:26.367316 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-02-02 00:57:26.367328 | orchestrator | Monday 02 February 2026 00:52:05 +0000 (0:00:02.467) 0:01:49.889 *******
2026-02-02 00:57:26.367334 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.367341 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.367348 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.367355 | orchestrator |
2026-02-02 00:57:26.367417 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-02-02 00:57:26.367426 | orchestrator | Monday 02 February 2026 00:52:06 +0000 (0:00:00.536) 0:01:50.425 *******
2026-02-02 00:57:26.367433 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.367440 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.367450 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.367457 | orchestrator |
2026-02-02 00:57:26.367464 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-02-02 00:57:26.367471 | orchestrator | Monday 02 February 2026 00:52:07 +0000 (0:00:01.341) 0:01:51.766 *******
2026-02-02 00:57:26.367477 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:57:26.367484 | orchestrator |
2026-02-02 00:57:26.367491 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-02-02 00:57:26.367498 | orchestrator | Monday 02 February 2026 00:52:08 +0000 (0:00:01.117) 0:01:52.883 *******
2026-02-02 00:57:26.367506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:57:26.367514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:57:26.367522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.367540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.367551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.367560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.367567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.367574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.367585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:57:26.367598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.367608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.367616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.367622 | orchestrator |
2026-02-02 00:57:26.367629 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-02-02 00:57:26.367636 | orchestrator | Monday 02 February 2026 00:52:13 +0000 (0:00:05.065) 0:01:57.949 *******
2026-02-02 00:57:26.367643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:57:26.367653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.367676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.367701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-02 00:57:26.367713 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.367726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:57:26.367738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.367749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.367773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.367785 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.367801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.367815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.367828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.367842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.367858 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.367865 | orchestrator | 2026-02-02 00:57:26.367872 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-02 00:57:26.367879 | orchestrator | Monday 02 February 2026 00:52:15 +0000 (0:00:01.529) 0:01:59.478 ******* 2026-02-02 00:57:26.367886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.367899 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-02 00:57:26.367906 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.367913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-02 00:57:26.367920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-02 00:57:26.367927 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.367940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-02 00:57:26.367947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-02 00:57:26.367954 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.367961 | orchestrator |
2026-02-02 00:57:26.367968 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-02-02 00:57:26.367975 | orchestrator | Monday 02 February 2026 00:52:16 +0000 (0:00:01.390) 0:02:00.869 *******
2026-02-02 00:57:26.367982 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:57:26.367988 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:57:26.367995 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:57:26.368002 | orchestrator |
2026-02-02 00:57:26.368008 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-02-02 00:57:26.368015 | orchestrator | Monday 02 February 2026 00:52:19 +0000 (0:00:02.425) 0:02:03.295 *******
2026-02-02 00:57:26.368022 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:57:26.368028 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:57:26.368035 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:57:26.368042 | orchestrator |
2026-02-02 00:57:26.368049 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-02-02 00:57:26.368055 | orchestrator | Monday 02 February 2026 00:52:21 +0000 (0:00:02.755) 0:02:06.050 *******
2026-02-02 00:57:26.368063 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.368069 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.368076 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.368083 | orchestrator |
2026-02-02 00:57:26.368090 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-02-02 00:57:26.368096 | orchestrator | Monday 02 February 2026 00:52:22 +0000 (0:00:00.376) 0:02:06.427 *******
2026-02-02 00:57:26.368108 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.368115 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.368121 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.368128 | orchestrator |
2026-02-02 00:57:26.368135 | orchestrator | TASK [include_role : designate] ************************************************
2026-02-02 00:57:26.368141 | orchestrator | Monday 02 February 2026 00:52:22 +0000 (0:00:00.361) 0:02:06.788 *******
2026-02-02 00:57:26.368148 | orchestrator | included: designate for
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.368155 | orchestrator | 2026-02-02 00:57:26.368167 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-02 00:57:26.368178 | orchestrator | Monday 02 February 2026 00:52:23 +0000 (0:00:01.027) 0:02:07.816 ******* 2026-02-02 00:57:26.368190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.368208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 00:57:26.368272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 
00:57:26.368322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.368366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 00:57:26.368378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.368391 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 00:57:26.368417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368437 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 
'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368507 | orchestrator | 2026-02-02 00:57:26.368515 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-02 00:57:26.368521 | orchestrator | Monday 02 February 2026 00:52:27 +0000 (0:00:04.130) 0:02:11.946 ******* 2026-02-02 00:57:26.368531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.368543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 00:57:26.368550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.368556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.371820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.371892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.371913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 00:57:26.371941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.371954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-02 00:57:26.371966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.371978 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.371992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.372018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2026-02-02 00:57:26.372034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.372053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.372064 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.372075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.372085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 00:57:26.372096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.372112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.372126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.372142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.372152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.372162 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.372172 | orchestrator | 2026-02-02 00:57:26.372183 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-02 00:57:26.372193 | orchestrator | Monday 02 February 2026 00:52:28 +0000 (0:00:00.933) 0:02:12.880 ******* 2026-02-02 00:57:26.372204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.372246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.372259 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.372269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.372280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.372290 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.372300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.372311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.372320 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.372330 | orchestrator | 2026-02-02 00:57:26.372346 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-02 00:57:26.372362 | orchestrator | Monday 02 February 2026 00:52:30 +0000 (0:00:01.661) 0:02:14.542 ******* 2026-02-02 00:57:26.372372 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.372382 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.372392 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.372401 | orchestrator | 2026-02-02 00:57:26.372411 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-02 00:57:26.372421 | orchestrator | Monday 02 February 2026 00:52:31 +0000 (0:00:01.359) 0:02:15.901 ******* 2026-02-02 00:57:26.372431 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.372440 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.372450 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.372460 | orchestrator | 2026-02-02 00:57:26.372469 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-02 00:57:26.372479 | orchestrator | Monday 02 February 2026 00:52:33 +0000 (0:00:02.141) 0:02:18.042 ******* 2026-02-02 00:57:26.372489 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.372499 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.372509 | orchestrator 
| skipping: [testbed-node-2] 2026-02-02 00:57:26.372518 | orchestrator | 2026-02-02 00:57:26.372532 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-02 00:57:26.372542 | orchestrator | Monday 02 February 2026 00:52:34 +0000 (0:00:00.351) 0:02:18.394 ******* 2026-02-02 00:57:26.372552 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.372562 | orchestrator | 2026-02-02 00:57:26.372571 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-02 00:57:26.372581 | orchestrator | Monday 02 February 2026 00:52:35 +0000 (0:00:01.061) 0:02:19.455 ******* 2026-02-02 00:57:26.372594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 00:57:26.372616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-02 00:57:26.372638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 00:57:26.372657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-02 00:57:26.372677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}}}}) 2026-02-02 00:57:26.372690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-02 
00:57:26.372706 | orchestrator | 2026-02-02 00:57:26.372722 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-02 00:57:26.372732 | orchestrator | Monday 02 February 2026 00:52:39 +0000 (0:00:04.462) 0:02:23.917 ******* 2026-02-02 00:57:26.372746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 00:57:26.372758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-02 00:57:26.372778 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.372801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 00:57:26.372813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 00:57:26.372841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-02 00:57:26.372853 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.372865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-02 00:57:26.372881 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.372891 | orchestrator | 2026-02-02 00:57:26.372901 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-02 
00:57:26.372911 | orchestrator | Monday 02 February 2026 00:52:43 +0000 (0:00:03.610) 0:02:27.528 ******* 2026-02-02 00:57:26.372921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 00:57:26.372938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 00:57:26.372948 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.372959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 00:57:26.372973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 00:57:26.372984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 00:57:26.372994 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.373004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 00:57:26.373014 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.373024 | orchestrator | 2026-02-02 00:57:26.373034 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-02 00:57:26.373048 | orchestrator | Monday 02 February 2026 00:52:47 +0000 
(0:00:03.770) 0:02:31.298 ******* 2026-02-02 00:57:26.373058 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.373068 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.373078 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.373087 | orchestrator | 2026-02-02 00:57:26.373097 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-02 00:57:26.373107 | orchestrator | Monday 02 February 2026 00:52:48 +0000 (0:00:01.255) 0:02:32.554 ******* 2026-02-02 00:57:26.373117 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.373127 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.373136 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.373146 | orchestrator | 2026-02-02 00:57:26.373156 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-02 00:57:26.373165 | orchestrator | Monday 02 February 2026 00:52:50 +0000 (0:00:02.092) 0:02:34.647 ******* 2026-02-02 00:57:26.373175 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.373185 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.373195 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.373204 | orchestrator | 2026-02-02 00:57:26.373237 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-02 00:57:26.373253 | orchestrator | Monday 02 February 2026 00:52:50 +0000 (0:00:00.324) 0:02:34.971 ******* 2026-02-02 00:57:26.373269 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.373286 | orchestrator | 2026-02-02 00:57:26.373303 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-02 00:57:26.373319 | orchestrator | Monday 02 February 2026 00:52:51 +0000 (0:00:00.860) 0:02:35.832 ******* 2026-02-02 00:57:26.373339 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.373356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.373366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.373383 | orchestrator | 2026-02-02 00:57:26.373393 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-02 00:57:26.373403 | orchestrator | Monday 02 February 2026 00:52:55 +0000 (0:00:03.867) 0:02:39.699 ******* 2026-02-02 00:57:26.373413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.373424 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.373434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.373444 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.373462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.373472 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.373482 | orchestrator | 2026-02-02 00:57:26.373492 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-02 00:57:26.373502 | orchestrator | Monday 02 February 2026 00:52:56 +0000 (0:00:00.437) 0:02:40.136 ******* 2026-02-02 00:57:26.373512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.373530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.373540 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.373551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.373566 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.373576 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.373586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.373596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.373606 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.373616 | orchestrator | 2026-02-02 00:57:26.373625 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-02 00:57:26.373635 | orchestrator | Monday 02 February 2026 00:52:56 +0000 (0:00:00.913) 0:02:41.050 ******* 2026-02-02 00:57:26.373645 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.373654 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.373700 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.373710 | orchestrator | 2026-02-02 00:57:26.373720 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] 
************ 2026-02-02 00:57:26.373730 | orchestrator | Monday 02 February 2026 00:52:58 +0000 (0:00:01.584) 0:02:42.635 ******* 2026-02-02 00:57:26.373740 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.373749 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.373759 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.373769 | orchestrator | 2026-02-02 00:57:26.373779 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-02 00:57:26.373788 | orchestrator | Monday 02 February 2026 00:53:00 +0000 (0:00:02.201) 0:02:44.836 ******* 2026-02-02 00:57:26.373798 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.373808 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.373817 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.373827 | orchestrator | 2026-02-02 00:57:26.373837 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-02 00:57:26.373846 | orchestrator | Monday 02 February 2026 00:53:01 +0000 (0:00:00.423) 0:02:45.259 ******* 2026-02-02 00:57:26.373856 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.373866 | orchestrator | 2026-02-02 00:57:26.373876 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-02 00:57:26.373885 | orchestrator | Monday 02 February 2026 00:53:02 +0000 (0:00:01.246) 0:02:46.506 ******* 2026-02-02 00:57:26.373910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 00:57:26.373929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 00:57:26.373954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 00:57:26.373971 | orchestrator | 2026-02-02 00:57:26.373982 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-02 00:57:26.373992 | orchestrator | Monday 02 February 2026 00:53:06 +0000 (0:00:04.240) 0:02:50.746 ******* 2026-02-02 00:57:26.374009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 00:57:26.374055 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.374075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 00:57:26.374093 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.374114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 00:57:26.374134 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.374144 | orchestrator | 2026-02-02 00:57:26.374154 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-02 00:57:26.374164 | orchestrator | Monday 02 February 2026 00:53:07 +0000 (0:00:00.922) 0:02:51.668 ******* 2026-02-02 00:57:26.374180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-02 00:57:26.374192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 00:57:26.374205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-02 00:57:26.374257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-02 00:57:26.374269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 00:57:26.374279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 00:57:26.374290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-02 00:57:26.374300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-02 00:57:26.374310 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.374321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 00:57:26.374331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-02 00:57:26.374341 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.374364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-02 00:57:26.374374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 00:57:26.374384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-02 00:57:26.374399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 00:57:26.374409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-02 00:57:26.374419 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.374429 | orchestrator | 2026-02-02 00:57:26.374439 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-02 00:57:26.374449 | orchestrator | Monday 02 February 2026 00:53:09 +0000 (0:00:01.401) 0:02:53.069 ******* 2026-02-02 00:57:26.374459 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.374469 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.374479 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.374488 | orchestrator | 2026-02-02 00:57:26.374498 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-02 00:57:26.374508 | orchestrator | Monday 02 February 2026 00:53:10 +0000 (0:00:01.515) 0:02:54.585 ******* 
2026-02-02 00:57:26.374518 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.374527 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.374537 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.374547 | orchestrator | 2026-02-02 00:57:26.374557 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-02 00:57:26.374566 | orchestrator | Monday 02 February 2026 00:53:12 +0000 (0:00:01.816) 0:02:56.401 ******* 2026-02-02 00:57:26.374576 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.374586 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.374596 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.374605 | orchestrator | 2026-02-02 00:57:26.374615 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-02 00:57:26.374625 | orchestrator | Monday 02 February 2026 00:53:12 +0000 (0:00:00.268) 0:02:56.669 ******* 2026-02-02 00:57:26.374635 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.374644 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.374654 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.374664 | orchestrator | 2026-02-02 00:57:26.374673 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-02 00:57:26.374683 | orchestrator | Monday 02 February 2026 00:53:12 +0000 (0:00:00.274) 0:02:56.943 ******* 2026-02-02 00:57:26.374693 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.374703 | orchestrator | 2026-02-02 00:57:26.374713 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-02 00:57:26.374728 | orchestrator | Monday 02 February 2026 00:53:14 +0000 (0:00:01.302) 0:02:58.246 ******* 2026-02-02 00:57:26.374740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-02 00:57:26.374767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 00:57:26.374784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-02 00:57:26.374796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 00:57:26.374807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 00:57:26.374822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 00:57:26.374840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-02 00:57:26.374856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 00:57:26.374867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 00:57:26.374877 | orchestrator | 2026-02-02 00:57:26.374887 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-02 00:57:26.374897 | orchestrator | Monday 02 February 2026 00:53:17 +0000 (0:00:03.435) 0:03:01.682 ******* 2026-02-02 00:57:26.374908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-02 00:57:26.374924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 00:57:26.374941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 
'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-02 00:57:26.374956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 00:57:26.374967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 00:57:26.374977 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.374987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 00:57:26.375002 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.375013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-02 00:57:26.375028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 00:57:26.375053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 00:57:26.375070 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.375086 | orchestrator | 2026-02-02 00:57:26.375104 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-02 00:57:26.375121 | orchestrator | Monday 02 February 2026 00:53:18 +0000 (0:00:01.095) 0:03:02.777 ******* 2026-02-02 00:57:26.375140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-02 00:57:26.375151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-02 00:57:26.375162 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.375172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-02 00:57:26.375182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-02 00:57:26.375198 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.375208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-02 00:57:26.375244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-02 00:57:26.375255 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.375265 | orchestrator | 2026-02-02 00:57:26.375275 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-02 00:57:26.375285 | orchestrator | Monday 02 February 2026 00:53:20 +0000 (0:00:02.247) 0:03:05.024 ******* 2026-02-02 00:57:26.375295 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.375305 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.375314 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.375324 | orchestrator | 2026-02-02 00:57:26.375334 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-02 00:57:26.375344 | orchestrator | Monday 02 February 2026 00:53:22 +0000 
(0:00:01.530) 0:03:06.554 ******* 2026-02-02 00:57:26.375354 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.375363 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.375373 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.375383 | orchestrator | 2026-02-02 00:57:26.375392 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-02 00:57:26.375402 | orchestrator | Monday 02 February 2026 00:53:25 +0000 (0:00:02.840) 0:03:09.395 ******* 2026-02-02 00:57:26.375412 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.375422 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.375432 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.375441 | orchestrator | 2026-02-02 00:57:26.375452 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-02 00:57:26.375461 | orchestrator | Monday 02 February 2026 00:53:25 +0000 (0:00:00.443) 0:03:09.839 ******* 2026-02-02 00:57:26.375471 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.375481 | orchestrator | 2026-02-02 00:57:26.375491 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-02 00:57:26.375501 | orchestrator | Monday 02 February 2026 00:53:26 +0000 (0:00:00.978) 0:03:10.817 ******* 2026-02-02 00:57:26.375520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.375540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.375559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.375571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.375582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.375599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.375615 | orchestrator | 2026-02-02 00:57:26.375625 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-02 00:57:26.375635 | orchestrator | Monday 02 February 2026 00:53:30 +0000 (0:00:03.411) 0:03:14.229 ******* 2026-02-02 00:57:26.375646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 
'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.375683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.375694 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.375706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.375723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.375734 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.375748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 
'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.375765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.375776 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.375785 | orchestrator | 2026-02-02 00:57:26.375795 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-02 00:57:26.375806 | orchestrator | Monday 02 February 2026 00:53:30 +0000 (0:00:00.700) 0:03:14.929 ******* 2026-02-02 00:57:26.375815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.375826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.375836 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.375845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.375856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.375866 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.375876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.375886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.375896 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.375906 | orchestrator | 2026-02-02 00:57:26.375916 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-02 00:57:26.375925 | orchestrator | Monday 02 February 2026 00:53:31 +0000 (0:00:00.807) 0:03:15.736 ******* 2026-02-02 00:57:26.375946 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.375956 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.375966 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.375976 | orchestrator | 2026-02-02 00:57:26.375986 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-02 00:57:26.375995 | orchestrator | Monday 02 February 2026 00:53:33 +0000 (0:00:01.410) 0:03:17.147 ******* 2026-02-02 00:57:26.376005 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.376015 | orchestrator | changed: [testbed-node-1] 2026-02-02 
00:57:26.376025 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.376035 | orchestrator | 2026-02-02 00:57:26.376045 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-02 00:57:26.376055 | orchestrator | Monday 02 February 2026 00:53:35 +0000 (0:00:02.106) 0:03:19.254 ******* 2026-02-02 00:57:26.376065 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.376074 | orchestrator | 2026-02-02 00:57:26.376084 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-02 00:57:26.376094 | orchestrator | Monday 02 February 2026 00:53:36 +0000 (0:00:01.016) 0:03:20.270 ******* 2026-02-02 00:57:26.376109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.376120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.376178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.376276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376326 | orchestrator | 2026-02-02 00:57:26.376336 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-02 00:57:26.376346 | orchestrator | Monday 02 February 2026 00:53:41 +0000 (0:00:04.927) 0:03:25.197 ******* 
2026-02-02 00:57:26.376356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.376367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.376427 | orchestrator | skipping: 
[testbed-node-0] 2026-02-02 00:57:26.376437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  
2026-02-02 00:57:26.376469 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.376479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.376499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.376531 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.376539 | orchestrator | 2026-02-02 00:57:26.376548 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-02 00:57:26.376556 | orchestrator | Monday 02 February 2026 00:53:42 +0000 (0:00:00.893) 0:03:26.091 ******* 2026-02-02 00:57:26.376564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.376573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.376581 | 
orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.376590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.376598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.376611 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.376619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.376627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.376636 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.376644 | orchestrator | 2026-02-02 00:57:26.376652 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-02 00:57:26.376660 | orchestrator | Monday 02 February 2026 00:53:42 +0000 (0:00:00.851) 0:03:26.942 ******* 2026-02-02 00:57:26.376668 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.376676 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.376684 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.376692 | orchestrator | 2026-02-02 00:57:26.376701 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-02 00:57:26.376709 | orchestrator | Monday 02 
February 2026 00:53:44 +0000 (0:00:01.192) 0:03:28.134 ******* 2026-02-02 00:57:26.376717 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.376725 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.376733 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.376741 | orchestrator | 2026-02-02 00:57:26.376749 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-02 00:57:26.376757 | orchestrator | Monday 02 February 2026 00:53:46 +0000 (0:00:02.317) 0:03:30.452 ******* 2026-02-02 00:57:26.376769 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.376778 | orchestrator | 2026-02-02 00:57:26.376786 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-02 00:57:26.376795 | orchestrator | Monday 02 February 2026 00:53:47 +0000 (0:00:01.184) 0:03:31.636 ******* 2026-02-02 00:57:26.376803 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 00:57:26.376811 | orchestrator | 2026-02-02 00:57:26.376819 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-02 00:57:26.376827 | orchestrator | Monday 02 February 2026 00:53:51 +0000 (0:00:03.573) 0:03:35.210 ******* 2026-02-02 00:57:26.376840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 00:57:26.376854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 00:57:26.376863 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.376877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 00:57:26.376891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 00:57:26.376900 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.376909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 00:57:26.376923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 00:57:26.376932 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.376940 | orchestrator | 2026-02-02 00:57:26.376948 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-02 00:57:26.376956 | orchestrator | Monday 02 February 2026 00:53:56 +0000 (0:00:04.972) 0:03:40.183 ******* 2026-02-02 00:57:26.376975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 00:57:26.376989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 00:57:26.376998 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.377007 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 00:57:26.377021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 00:57:26.377030 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.377042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 00:57:26.377056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 00:57:26.377064 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.377073 | orchestrator | 2026-02-02 00:57:26.377081 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-02 00:57:26.377089 | orchestrator | Monday 02 February 2026 00:53:58 +0000 (0:00:02.099) 0:03:42.282 ******* 2026-02-02 00:57:26.377098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 00:57:26.377312 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 00:57:26.377330 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.377345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 00:57:26.377361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 00:57:26.377369 | orchestrator | skipping: [testbed-node-1] 
2026-02-02 00:57:26.377383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 00:57:26.377400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 00:57:26.377419 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.377431 | orchestrator | 2026-02-02 00:57:26.377442 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-02 00:57:26.377452 | orchestrator | Monday 02 February 2026 00:54:00 +0000 (0:00:02.727) 0:03:45.010 ******* 2026-02-02 00:57:26.377463 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.377473 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.377483 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.377493 | orchestrator | 2026-02-02 00:57:26.377503 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-02 
00:57:26.377514 | orchestrator | Monday 02 February 2026 00:54:03 +0000 (0:00:02.096) 0:03:47.106 ******* 2026-02-02 00:57:26.377525 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.377535 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.377547 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.377559 | orchestrator | 2026-02-02 00:57:26.377570 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-02 00:57:26.377581 | orchestrator | Monday 02 February 2026 00:54:04 +0000 (0:00:01.836) 0:03:48.943 ******* 2026-02-02 00:57:26.377592 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.377599 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.377606 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.377613 | orchestrator | 2026-02-02 00:57:26.377620 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-02 00:57:26.377627 | orchestrator | Monday 02 February 2026 00:54:05 +0000 (0:00:00.346) 0:03:49.290 ******* 2026-02-02 00:57:26.377634 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.377641 | orchestrator | 2026-02-02 00:57:26.377648 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-02 00:57:26.377655 | orchestrator | Monday 02 February 2026 00:54:06 +0000 (0:00:01.411) 0:03:50.702 ******* 2026-02-02 00:57:26.377669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-02 00:57:26.377689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-02 00:57:26.377697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-02 00:57:26.377704 | orchestrator | 2026-02-02 00:57:26.377711 | orchestrator | TASK [haproxy-config : Add configuration for memcached 
when using single external frontend] *** 2026-02-02 00:57:26.377718 | orchestrator | Monday 02 February 2026 00:54:08 +0000 (0:00:01.513) 0:03:52.215 ******* 2026-02-02 00:57:26.377725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-02 00:57:26.377733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-02 00:57:26.377740 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.377752 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.377763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 
'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-02 00:57:26.377771 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.377778 | orchestrator | 2026-02-02 00:57:26.377785 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-02 00:57:26.377792 | orchestrator | Monday 02 February 2026 00:54:08 +0000 (0:00:00.432) 0:03:52.647 ******* 2026-02-02 00:57:26.377805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-02 00:57:26.377813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-02 00:57:26.377820 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.377827 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.377834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-02 00:57:26.377843 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.377854 | orchestrator | 2026-02-02 00:57:26.377864 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-02 00:57:26.377871 | orchestrator | Monday 02 February 2026 00:54:09 +0000 (0:00:00.975) 0:03:53.623 ******* 2026-02-02 00:57:26.377878 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.377884 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.377891 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.377898 | orchestrator | 2026-02-02 00:57:26.377905 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-02 00:57:26.377912 | orchestrator | Monday 02 February 2026 00:54:10 +0000 (0:00:00.512) 0:03:54.135 ******* 2026-02-02 00:57:26.377919 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.377925 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.377932 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.377939 | orchestrator | 2026-02-02 00:57:26.377946 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-02 00:57:26.377953 | orchestrator | Monday 02 February 2026 00:54:11 +0000 (0:00:01.545) 0:03:55.680 ******* 2026-02-02 00:57:26.377959 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.377967 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.377973 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.377981 | orchestrator | 2026-02-02 00:57:26.377988 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-02 00:57:26.377994 | orchestrator | Monday 02 February 2026 00:54:11 +0000 (0:00:00.335) 0:03:56.016 ******* 2026-02-02 00:57:26.378001 
| orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.378008 | orchestrator | 2026-02-02 00:57:26.378050 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-02 00:57:26.378064 | orchestrator | Monday 02 February 2026 00:54:13 +0000 (0:00:01.581) 0:03:57.597 ******* 2026-02-02 00:57:26.378073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.378088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-02 00:57:26.378107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-02 00:57:26.378115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378148 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 00:57:26.378159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 00:57:26.378166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.378180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': 
'30'}}})  2026-02-02 00:57:26.378207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-02 00:57:26.378239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-02 00:57:26.378247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-02 00:57:26.378269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378280 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 00:57:26.378298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 00:57:26.378306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 00:57:26.378335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 00:57:26.378342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-02 00:57:26.378360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.378372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 00:57:26.378416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-02 00:57:26.378427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 00:57:26.378434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-02 00:57:26.378446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 00:57:26.378485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 00:57:26.378493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-02 00:57:26.378512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 00:57:26.378549 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 00:57:26.378557 | orchestrator | 2026-02-02 00:57:26.378564 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-02 00:57:26.378571 | orchestrator | Monday 02 February 2026 00:54:18 +0000 (0:00:05.104) 0:04:02.701 ******* 2026-02-02 00:57:26.378578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.378590 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-02 00:57:26.378613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-02 00:57:26.378620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  
2026-02-02 00:57:26.378635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 00:57:26.378656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2026-02-02 00:57:26.378669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.378680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-02 00:57:26.378710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': 
'', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-02 00:57:26.378723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-02 00:57:26.378738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 
'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 00:57:26.378771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 00:57:26.378786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378793 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.378800 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 00:57:26.378807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 00:57:26.378818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-02 00:57:26.378841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 
00:57:26.378855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 00:57:26.378863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 00:57:26.378870 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.378885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.378897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-02 00:57:26.378912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-02 00:57:26.378925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.378933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.378954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 00:57:26.379015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 00:57:26.379033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.379041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-02 00:57:26.379053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 00:57:26.379069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.379077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 00:57:26.379085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 00:57:26.379092 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.379099 | orchestrator | 2026-02-02 00:57:26.379106 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-02 00:57:26.379113 | orchestrator | Monday 02 February 2026 00:54:20 +0000 (0:00:02.243) 0:04:04.944 ******* 2026-02-02 00:57:26.379121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.379129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.379136 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.379143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.379150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.379157 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.379164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.379180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.379187 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.379194 | orchestrator | 2026-02-02 00:57:26.379201 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-02 00:57:26.379208 | orchestrator | Monday 02 February 2026 00:54:23 +0000 (0:00:02.339) 0:04:07.284 ******* 2026-02-02 00:57:26.379233 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.379241 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.379248 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.379254 | orchestrator | 2026-02-02 00:57:26.379262 | 
orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-02 00:57:26.379269 | orchestrator | Monday 02 February 2026 00:54:24 +0000 (0:00:01.369) 0:04:08.654 ******* 2026-02-02 00:57:26.379275 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.379282 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.379289 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.379296 | orchestrator | 2026-02-02 00:57:26.379306 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-02 00:57:26.379313 | orchestrator | Monday 02 February 2026 00:54:26 +0000 (0:00:02.153) 0:04:10.808 ******* 2026-02-02 00:57:26.379320 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.379326 | orchestrator | 2026-02-02 00:57:26.379333 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-02 00:57:26.379340 | orchestrator | Monday 02 February 2026 00:54:28 +0000 (0:00:01.614) 0:04:12.422 ******* 2026-02-02 00:57:26.379348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 00:57:26.379356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 00:57:26.379374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 00:57:26.379382 | orchestrator | 2026-02-02 00:57:26.379389 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-02 00:57:26.379396 | orchestrator | Monday 02 February 2026 00:54:32 +0000 (0:00:04.619) 0:04:17.042 ******* 2026-02-02 00:57:26.379407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 00:57:26.379416 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.379424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 00:57:26.379432 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.379439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /']}}}})  2026-02-02 00:57:26.379451 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.379458 | orchestrator | 2026-02-02 00:57:26.379465 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-02 00:57:26.379472 | orchestrator | Monday 02 February 2026 00:54:33 +0000 (0:00:00.528) 0:04:17.571 ******* 2026-02-02 00:57:26.379479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 00:57:26.379492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 00:57:26 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:57:26.379507 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.379514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 00:57:26.379525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 00:57:26.379532 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.379539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 00:57:26.379546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 00:57:26.379553 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.379561 | orchestrator | 2026-02-02 00:57:26.379568 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-02 00:57:26.379574 | orchestrator | Monday 02 February 2026 00:54:34 +0000 (0:00:01.318) 0:04:18.889 ******* 2026-02-02 00:57:26.379582 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.379588 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.379595 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.379602 | orchestrator | 2026-02-02 00:57:26.379609 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-02 00:57:26.379616 | orchestrator | Monday 02 February 2026 00:54:36 +0000 (0:00:01.580) 0:04:20.470 ******* 2026-02-02 00:57:26.379623 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.379630 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.379636 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.379644 | orchestrator | 2026-02-02 00:57:26.379650 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-02 00:57:26.379662 | orchestrator | Monday 02 February 2026 00:54:38 +0000 (0:00:02.222) 0:04:22.692 ******* 2026-02-02 00:57:26.379669 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.379676 | orchestrator | 2026-02-02 00:57:26.379683 | 
orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-02 00:57:26.379690 | orchestrator | Monday 02 February 2026 00:54:39 +0000 (0:00:01.369) 0:04:24.062 ******* 2026-02-02 00:57:26.379697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.379711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.379723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.379732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.379745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.379752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.379764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.379895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.379907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.379921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.379928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.379936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.379943 | orchestrator | 2026-02-02 00:57:26.379950 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-02 00:57:26.379957 | orchestrator | Monday 02 February 2026 00:54:46 +0000 (0:00:06.172) 0:04:30.234 ******* 2026-02-02 00:57:26.379987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.379996 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.380009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.380017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.380024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.380051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.380064 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.380072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.380079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.380087 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.380099 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.380113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.380138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.380152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.380159 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.380166 | orchestrator | 2026-02-02 00:57:26.380173 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-02 00:57:26.380180 | orchestrator | Monday 02 February 2026 00:54:47 +0000 (0:00:01.181) 0:04:31.416 ******* 2026-02-02 00:57:26.380187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}})  2026-02-02 00:57:26.380195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.380202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.380209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.380269 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.380277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.380285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.380291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.380303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.380316 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.380328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.380344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.380368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.380404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.380415 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.380426 | orchestrator | 2026-02-02 00:57:26.380436 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-02 00:57:26.380447 | orchestrator | Monday 02 February 2026 00:54:49 +0000 (0:00:01.812) 0:04:33.228 ******* 2026-02-02 00:57:26.380457 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.380467 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.380477 | orchestrator | changed: 
[testbed-node-2] 2026-02-02 00:57:26.380488 | orchestrator | 2026-02-02 00:57:26.380500 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-02 00:57:26.380510 | orchestrator | Monday 02 February 2026 00:54:50 +0000 (0:00:01.780) 0:04:35.008 ******* 2026-02-02 00:57:26.380522 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.380533 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.380545 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.380553 | orchestrator | 2026-02-02 00:57:26.380560 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-02 00:57:26.380566 | orchestrator | Monday 02 February 2026 00:54:52 +0000 (0:00:02.006) 0:04:37.015 ******* 2026-02-02 00:57:26.380573 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.380580 | orchestrator | 2026-02-02 00:57:26.380586 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-02 00:57:26.380593 | orchestrator | Monday 02 February 2026 00:54:54 +0000 (0:00:01.691) 0:04:38.707 ******* 2026-02-02 00:57:26.380600 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-02 00:57:26.380607 | orchestrator | 2026-02-02 00:57:26.380614 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-02 00:57:26.380620 | orchestrator | Monday 02 February 2026 00:54:55 +0000 (0:00:01.179) 0:04:39.886 ******* 2026-02-02 00:57:26.380628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-02 00:57:26.380636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-02 00:57:26.380643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-02 00:57:26.380657 | orchestrator | 2026-02-02 00:57:26.380664 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-02 00:57:26.380671 | orchestrator | Monday 02 February 2026 00:55:00 +0000 (0:00:04.556) 0:04:44.443 ******* 2026-02-02 00:57:26.380682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 00:57:26.380690 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.380719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 00:57:26.380727 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.380735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 00:57:26.380742 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.380749 | orchestrator | 2026-02-02 00:57:26.380755 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-02 00:57:26.380762 | orchestrator | Monday 02 February 2026 00:55:03 +0000 (0:00:03.175) 0:04:47.622 ******* 2026-02-02 00:57:26.380769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
2026-02-02 00:57:26.380776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 00:57:26.380783 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.380789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 00:57:26.380796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 00:57:26.380803 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.380809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 00:57:26.380821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 00:57:26.380828 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.380834 | orchestrator | 2026-02-02 00:57:26.380841 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-02 00:57:26.380847 | orchestrator | Monday 02 February 2026 00:55:06 +0000 (0:00:03.039) 0:04:50.662 ******* 2026-02-02 00:57:26.380853 | orchestrator | changed: [testbed-node-1] 
2026-02-02 00:57:26.380860 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.380866 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.380872 | orchestrator | 2026-02-02 00:57:26.380879 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-02 00:57:26.380885 | orchestrator | Monday 02 February 2026 00:55:09 +0000 (0:00:03.214) 0:04:53.876 ******* 2026-02-02 00:57:26.380891 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.380898 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.380904 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.380910 | orchestrator | 2026-02-02 00:57:26.380917 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-02 00:57:26.380923 | orchestrator | Monday 02 February 2026 00:55:12 +0000 (0:00:02.654) 0:04:56.531 ******* 2026-02-02 00:57:26.380931 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-02 00:57:26.380937 | orchestrator | 2026-02-02 00:57:26.380943 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-02 00:57:26.380950 | orchestrator | Monday 02 February 2026 00:55:13 +0000 (0:00:01.355) 0:04:57.886 ******* 2026-02-02 00:57:26.380960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 
00:57:26.380968 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.380989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 00:57:26.380997 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.381003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 00:57:26.381010 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.381016 | orchestrator | 2026-02-02 00:57:26.381022 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-02 00:57:26.381034 | orchestrator | Monday 02 February 2026 00:55:15 +0000 (0:00:01.468) 0:04:59.355 ******* 2026-02-02 00:57:26.381041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 00:57:26.381047 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.381054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 00:57:26.381060 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.381067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 00:57:26.381073 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.381080 | orchestrator | 2026-02-02 00:57:26.381086 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-02 00:57:26.381093 | orchestrator | Monday 02 February 2026 00:55:17 +0000 (0:00:01.708) 0:05:01.064 ******* 2026-02-02 00:57:26.381099 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.381105 | orchestrator | skipping: [testbed-node-2] 
2026-02-02 00:57:26.381112 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.381118 | orchestrator | 2026-02-02 00:57:26.381124 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-02 00:57:26.381130 | orchestrator | Monday 02 February 2026 00:55:18 +0000 (0:00:01.673) 0:05:02.738 ******* 2026-02-02 00:57:26.381137 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:57:26.381144 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:57:26.381150 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:57:26.381156 | orchestrator | 2026-02-02 00:57:26.381165 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-02 00:57:26.381172 | orchestrator | Monday 02 February 2026 00:55:21 +0000 (0:00:02.597) 0:05:05.335 ******* 2026-02-02 00:57:26.381178 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:57:26.381184 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:57:26.381190 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:57:26.381197 | orchestrator | 2026-02-02 00:57:26.381203 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-02 00:57:26.381209 | orchestrator | Monday 02 February 2026 00:55:24 +0000 (0:00:02.963) 0:05:08.299 ******* 2026-02-02 00:57:26.381251 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-02 00:57:26.381259 | orchestrator | 2026-02-02 00:57:26.381265 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-02 00:57:26.381271 | orchestrator | Monday 02 February 2026 00:55:25 +0000 (0:00:00.779) 0:05:09.078 ******* 2026-02-02 00:57:26.381278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 00:57:26.381289 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.381296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 00:57:26.381303 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.381309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 00:57:26.381316 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.381322 | orchestrator | 2026-02-02 00:57:26.381328 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-02 00:57:26.381335 | orchestrator | Monday 02 
February 2026 00:55:26 +0000 (0:00:01.417) 0:05:10.495 ******* 2026-02-02 00:57:26.381341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 00:57:26.381348 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.381354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 00:57:26.381361 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.381374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 00:57:26.381380 | orchestrator | skipping: [testbed-node-2] 2026-02-02 
00:57:26.381387 | orchestrator | 2026-02-02 00:57:26.381393 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-02 00:57:26.381404 | orchestrator | Monday 02 February 2026 00:55:27 +0000 (0:00:01.032) 0:05:11.527 ******* 2026-02-02 00:57:26.381410 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.381417 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.381438 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.381444 | orchestrator | 2026-02-02 00:57:26.381451 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-02 00:57:26.381457 | orchestrator | Monday 02 February 2026 00:55:28 +0000 (0:00:01.519) 0:05:13.047 ******* 2026-02-02 00:57:26.381464 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:57:26.381470 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:57:26.381476 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:57:26.381482 | orchestrator | 2026-02-02 00:57:26.381489 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-02 00:57:26.381495 | orchestrator | Monday 02 February 2026 00:55:31 +0000 (0:00:02.622) 0:05:15.669 ******* 2026-02-02 00:57:26.381502 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:57:26.381508 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:57:26.381514 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:57:26.381520 | orchestrator | 2026-02-02 00:57:26.381527 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-02 00:57:26.381533 | orchestrator | Monday 02 February 2026 00:55:34 +0000 (0:00:03.149) 0:05:18.819 ******* 2026-02-02 00:57:26.381539 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.381546 | orchestrator | 2026-02-02 00:57:26.381552 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] 
******************** 2026-02-02 00:57:26.381559 | orchestrator | Monday 02 February 2026 00:55:36 +0000 (0:00:01.691) 0:05:20.510 ******* 2026-02-02 00:57:26.381565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 00:57:26.381573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 00:57:26.381579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.381590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.381616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.381624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 00:57:26.381631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 00:57:26.381638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.381644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.381654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.381678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 00:57:26.381685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 00:57:26.381692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.381699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.381705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.381718 | orchestrator | 2026-02-02 00:57:26.381725 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-02 00:57:26.381731 | orchestrator | Monday 02 February 2026 00:55:40 +0000 (0:00:03.967) 0:05:24.478 ******* 2026-02-02 00:57:26.381741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-02-02 00:57:26.381763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 00:57:26.381770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.381777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.381783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.381790 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.381796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 00:57:26.381810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 00:57:26.381830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.381838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.381844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.381851 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.381857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 00:57:26.381868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 00:57:26.381878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.381900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 00:57:26.381907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 00:57:26.381914 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.381920 | orchestrator | 2026-02-02 00:57:26.381927 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-02 
00:57:26.381934 | orchestrator | Monday 02 February 2026 00:55:41 +0000 (0:00:01.079) 0:05:25.558 ******* 2026-02-02 00:57:26.381940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-02 00:57:26.381947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-02 00:57:26.381954 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.381960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-02 00:57:26.381967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-02 00:57:26.381973 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.381984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-02 00:57:26.381991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-02 00:57:26.381997 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.382003 | orchestrator | 2026-02-02 00:57:26.382010 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] 
************ 2026-02-02 00:57:26.382038 | orchestrator | Monday 02 February 2026 00:55:42 +0000 (0:00:01.368) 0:05:26.926 ******* 2026-02-02 00:57:26.382046 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.382053 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.382059 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.382066 | orchestrator | 2026-02-02 00:57:26.382072 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-02 00:57:26.382078 | orchestrator | Monday 02 February 2026 00:55:44 +0000 (0:00:01.282) 0:05:28.209 ******* 2026-02-02 00:57:26.382085 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:57:26.382091 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:57:26.382097 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:57:26.382104 | orchestrator | 2026-02-02 00:57:26.382110 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-02 00:57:26.382116 | orchestrator | Monday 02 February 2026 00:55:46 +0000 (0:00:02.265) 0:05:30.474 ******* 2026-02-02 00:57:26.382123 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.382129 | orchestrator | 2026-02-02 00:57:26.382135 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-02 00:57:26.382141 | orchestrator | Monday 02 February 2026 00:55:48 +0000 (0:00:01.979) 0:05:32.454 ******* 2026-02-02 00:57:26.382168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.382177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.382184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.382207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 00:57:26.382260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 00:57:26.382271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 00:57:26.382283 | orchestrator | 2026-02-02 00:57:26.382290 | orchestrator | TASK [haproxy-config : Add 
configuration for opensearch when using single external frontend] *** 2026-02-02 00:57:26.382296 | orchestrator | Monday 02 February 2026 00:55:53 +0000 (0:00:05.202) 0:05:37.656 ******* 2026-02-02 00:57:26.382303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.382310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk 
GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 00:57:26.382317 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.382342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.382350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 00:57:26.382361 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.382369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.382376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 00:57:26.382386 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.382393 | orchestrator | 2026-02-02 00:57:26.382400 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-02 00:57:26.382406 | orchestrator | Monday 02 February 2026 00:55:54 +0000 (0:00:00.674) 0:05:38.331 ******* 2026-02-02 00:57:26.382413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.382433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-02 00:57:26.382441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-02 
00:57:26.382453 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.382459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.382466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-02 00:57:26.382473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-02 00:57:26.382479 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.382485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.382492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-02 00:57:26.382499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-02 00:57:26.382505 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.382511 | orchestrator | 2026-02-02 00:57:26.382518 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-02 00:57:26.382524 | orchestrator | Monday 02 February 2026 00:55:55 +0000 (0:00:01.718) 0:05:40.050 ******* 2026-02-02 00:57:26.382530 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.382537 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.382543 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.382549 | orchestrator | 2026-02-02 00:57:26.382556 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-02 00:57:26.382562 | orchestrator | Monday 02 February 2026 00:55:56 +0000 (0:00:00.472) 0:05:40.522 ******* 2026-02-02 00:57:26.382568 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.382574 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.382581 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.382587 | orchestrator | 2026-02-02 00:57:26.382593 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-02 00:57:26.382600 | orchestrator | Monday 02 February 2026 00:55:57 +0000 (0:00:01.424) 0:05:41.947 ******* 2026-02-02 00:57:26.382606 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.382612 | orchestrator | 2026-02-02 00:57:26.382618 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-02 00:57:26.382625 | orchestrator | Monday 02 February 2026 00:55:59 +0000 (0:00:01.504) 0:05:43.452 ******* 2026-02-02 00:57:26.382649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-02 00:57:26.382662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 00:57:26.382669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.382676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.382683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 00:57:26.382690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-02 00:57:26.382700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 00:57:26.382724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.382732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.382739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 00:57:26.382746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-02 00:57:26.382753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 00:57:26.382760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.382787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.382795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 00:57:26.382802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.382810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-02 00:57:26.382817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.382824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.382837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 00:57:26.382860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.382869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-02 00:57:26.382877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:57:26.382884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.382933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option 
httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-02 00:57:26.382942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.382948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.382955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.382962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 00:57:26.382969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 00:57:26.382976 | orchestrator | 2026-02-02 00:57:26.382982 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-02 00:57:26.382994 | orchestrator | Monday 02 February 2026 00:56:04 +0000 (0:00:04.987) 0:05:48.439 ******* 2026-02-02 00:57:26.383017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-02 00:57:26.383025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 00:57:26.383032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.383039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-02-02 00:57:26.383046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 00:57:26.383053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.383067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 
'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-02 00:57:26.383078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-02 00:57:26.383086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.383093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 00:57:26.383099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.383110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 00:57:26.383117 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.383123 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.383134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.383142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-02 00:57:26.383149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 00:57:26.383156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 00:57:26.383203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.383243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.383256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-02 00:57:26.383263 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.383270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.383276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 00:57:26.383283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.383299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:57:26.383310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 00:57:26.383317 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.383324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-02 00:57:26.383331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.383337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 00:57:26.383348 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 00:57:26.383355 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.383361 | orchestrator | 2026-02-02 00:57:26.383368 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-02 00:57:26.383375 | orchestrator | Monday 02 February 2026 00:56:05 +0000 (0:00:00.939) 0:05:49.378 ******* 2026-02-02 00:57:26.383381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-02 00:57:26.383388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-02 00:57:26.383400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-02 00:57:26.383410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.383417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-02 00:57:26.383423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.383430 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.383437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.383443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.383455 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.383461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-02 00:57:26.383468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-02 00:57:26.383475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.383481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-02 00:57:26.383487 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.383494 | orchestrator | 2026-02-02 00:57:26.383500 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users 
config] ********* 2026-02-02 00:57:26.383507 | orchestrator | Monday 02 February 2026 00:56:06 +0000 (0:00:01.227) 0:05:50.606 ******* 2026-02-02 00:57:26.383513 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.383519 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.383526 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.383532 | orchestrator | 2026-02-02 00:57:26.383538 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-02 00:57:26.383545 | orchestrator | Monday 02 February 2026 00:56:07 +0000 (0:00:00.854) 0:05:51.461 ******* 2026-02-02 00:57:26.383554 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.383560 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.383567 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.383573 | orchestrator | 2026-02-02 00:57:26.383579 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-02 00:57:26.383585 | orchestrator | Monday 02 February 2026 00:56:08 +0000 (0:00:01.406) 0:05:52.867 ******* 2026-02-02 00:57:26.383592 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.383598 | orchestrator | 2026-02-02 00:57:26.383605 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-02 00:57:26.383615 | orchestrator | Monday 02 February 2026 00:56:10 +0000 (0:00:01.469) 0:05:54.337 ******* 2026-02-02 00:57:26.383621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:57:26.383633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:57:26.383641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 00:57:26.383648 | orchestrator | 2026-02-02 00:57:26.383655 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-02 00:57:26.383661 | orchestrator | Monday 02 February 2026 00:56:13 +0000 (0:00:02.837) 0:05:57.174 ******* 2026-02-02 00:57:26.383674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 00:57:26.383681 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.383688 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 00:57:26.383699 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.383706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}})  2026-02-02 00:57:26.383712 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.383719 | orchestrator | 2026-02-02 00:57:26.383725 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-02 00:57:26.383732 | orchestrator | Monday 02 February 2026 00:56:13 +0000 (0:00:00.821) 0:05:57.995 ******* 2026-02-02 00:57:26.383738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-02 00:57:26.383745 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.383751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-02 00:57:26.383757 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.383763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-02 00:57:26.383770 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.383776 | orchestrator | 2026-02-02 00:57:26.383782 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-02 00:57:26.383788 | orchestrator | Monday 02 February 2026 00:56:14 +0000 (0:00:00.676) 0:05:58.672 ******* 2026-02-02 00:57:26.383795 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.383801 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.383807 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.383813 | orchestrator | 2026-02-02 00:57:26.383819 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-02 00:57:26.383826 | orchestrator | Monday 02 February 2026 00:56:15 +0000 (0:00:00.455) 0:05:59.128 ******* 2026-02-02 
00:57:26.383832 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.383839 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.383845 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.383851 | orchestrator | 2026-02-02 00:57:26.383857 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-02 00:57:26.383863 | orchestrator | Monday 02 February 2026 00:56:16 +0000 (0:00:01.528) 0:06:00.656 ******* 2026-02-02 00:57:26.383873 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.383879 | orchestrator | 2026-02-02 00:57:26.383886 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-02 00:57:26.383892 | orchestrator | Monday 02 February 2026 00:56:18 +0000 (0:00:01.889) 0:06:02.546 ******* 2026-02-02 00:57:26.383906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-02 00:57:26.383914 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-02 00:57:26.383922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-02 00:57:26.383934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 00:57:26.383950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 00:57:26.383957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 00:57:26.383964 | orchestrator | 2026-02-02 00:57:26.383971 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-02 00:57:26.383978 | orchestrator | Monday 02 February 2026 00:56:24 +0000 (0:00:06.139) 0:06:08.685 ******* 2026-02-02 00:57:26.383984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-02 00:57:26.383995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 00:57:26.384006 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.384017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 
'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-02 00:57:26.384024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 00:57:26.384031 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.384038 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-02 00:57:26.384049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}}}})
2026-02-02 00:57:26.384063 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.384069 | orchestrator |
2026-02-02 00:57:26.384076 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-02-02 00:57:26.384082 | orchestrator | Monday 02 February 2026 00:56:25 +0000 (0:00:01.169) 0:06:09.855 *******
2026-02-02 00:57:26.384089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-02-02 00:57:26.384095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-02-02 00:57:26.384102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-02 00:57:26.384109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-02 00:57:26.384115 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.384122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-02-02 00:57:26.384128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-02-02 00:57:26.384135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-02 00:57:26.384141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-02 00:57:26.384148 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.384154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-02-02 00:57:26.384160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-02-02 00:57:26.384170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-02 00:57:26.384177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-02 00:57:26.384183 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.384189 | orchestrator |
2026-02-02 00:57:26.384198 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-02-02 00:57:26.384205 | orchestrator | Monday 02 February 2026 00:56:27 +0000 (0:00:01.484) 0:06:11.339 *******
2026-02-02 00:57:26.384211 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:57:26.384262 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:57:26.384269 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:57:26.384275 | orchestrator |
2026-02-02 00:57:26.384281 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-02-02 00:57:26.384288 | orchestrator | Monday 02 February 2026 00:56:28 +0000 (0:00:01.263) 0:06:12.602 *******
2026-02-02 00:57:26.384294 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:57:26.384304 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:57:26.384310 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:57:26.384317 | orchestrator |
2026-02-02 00:57:26.384323 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-02-02 00:57:26.384329 | orchestrator | Monday 02 February 2026 00:56:30 +0000 (0:00:02.287) 0:06:14.890 *******
2026-02-02 00:57:26.384336 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.384342 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.384348 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.384354 | orchestrator |
2026-02-02 00:57:26.384361 | orchestrator | TASK [include_role : trove] ****************************************************
2026-02-02 00:57:26.384367 | orchestrator | Monday 02 February 2026 00:56:31 +0000 (0:00:00.349) 0:06:15.240 *******
2026-02-02 00:57:26.384372 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.384378 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.384384 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.384389 | orchestrator |
2026-02-02 00:57:26.384394 | orchestrator | TASK [include_role : venus] ****************************************************
2026-02-02 00:57:26.384400 | orchestrator | Monday 02 February 2026 00:56:31 +0000 (0:00:00.755) 0:06:15.995 *******
2026-02-02 00:57:26.384406 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.384411 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.384416 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.384422 | orchestrator |
2026-02-02 00:57:26.384427 | orchestrator | TASK [include_role : watcher] **************************************************
2026-02-02 00:57:26.384433 | orchestrator | Monday 02 February 2026 00:56:32 +0000 (0:00:00.354) 0:06:16.349 *******
2026-02-02 00:57:26.384438 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.384444 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.384450 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.384455 | orchestrator |
2026-02-02 00:57:26.384461 | orchestrator | TASK [include_role : zun] ******************************************************
2026-02-02 00:57:26.384467 | orchestrator | Monday 02 February 2026 00:56:32 +0000 (0:00:00.334) 0:06:16.684 *******
2026-02-02 00:57:26.384472 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.384478 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.384483 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.384489 | orchestrator |
2026-02-02 00:57:26.384494 | orchestrator | TASK [include_role :
loadbalancer] ********************************************* 2026-02-02 00:57:26.384504 | orchestrator | Monday 02 February 2026 00:56:32 +0000 (0:00:00.356) 0:06:17.040 ******* 2026-02-02 00:57:26.384510 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:57:26.384516 | orchestrator | 2026-02-02 00:57:26.384521 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-02 00:57:26.384527 | orchestrator | Monday 02 February 2026 00:56:34 +0000 (0:00:01.897) 0:06:18.937 ******* 2026-02-02 00:57:26.384533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-02 00:57:26.384539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-02 00:57:26.384548 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-02 00:57:26.384558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 00:57:26.384564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 00:57:26.384570 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 00:57:26.384580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 00:57:26.384586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 00:57:26.384592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 00:57:26.384598 | orchestrator | 2026-02-02 00:57:26.384604 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-02 00:57:26.384609 | orchestrator | Monday 02 February 2026 00:56:37 +0000 (0:00:02.513) 0:06:21.451 ******* 2026-02-02 00:57:26.384615 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 00:57:26.384621 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:57:26.384626 | orchestrator | } 2026-02-02 00:57:26.384632 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 00:57:26.384637 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:57:26.384643 | orchestrator | } 2026-02-02 00:57:26.384649 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 00:57:26.384654 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:57:26.384660 | orchestrator | } 2026-02-02 00:57:26.384665 | orchestrator | 2026-02-02 00:57:26.384671 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 00:57:26.384679 | orchestrator | Monday 02 February 2026 00:56:37 +0000 (0:00:00.408) 0:06:21.859 ******* 2026-02-02 00:57:26.384689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-02 00:57:26.384695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 00:57:26.384705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 00:57:26.384711 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:57:26.384717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-02 00:57:26.384723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 00:57:26.384729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 00:57:26.384734 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:57:26.384743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 
'timeout': '30'}}})  2026-02-02 00:57:26.384752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 00:57:26.384762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 00:57:26.384768 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:57:26.384774 | orchestrator | 2026-02-02 00:57:26.384779 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-02 00:57:26.384785 | orchestrator | Monday 02 February 2026 00:56:39 +0000 (0:00:01.761) 0:06:23.620 ******* 2026-02-02 00:57:26.384790 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:57:26.384796 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:57:26.384802 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:57:26.384807 | orchestrator | 2026-02-02 00:57:26.384813 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-02 00:57:26.384818 | orchestrator | 
Monday 02 February 2026 00:56:40 +0000 (0:00:01.145) 0:06:24.766 *******
2026-02-02 00:57:26.384824 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:57:26.384829 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:57:26.384835 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:57:26.384840 | orchestrator |
2026-02-02 00:57:26.384846 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-02-02 00:57:26.384851 | orchestrator | Monday 02 February 2026 00:56:41 +0000 (0:00:00.378) 0:06:25.145 *******
2026-02-02 00:57:26.384857 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:57:26.384862 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:57:26.384868 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:57:26.384873 | orchestrator |
2026-02-02 00:57:26.384879 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-02-02 00:57:26.384884 | orchestrator | Monday 02 February 2026 00:56:42 +0000 (0:00:00.992) 0:06:26.137 *******
2026-02-02 00:57:26.384890 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:57:26.384895 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:57:26.384901 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:57:26.384906 | orchestrator |
2026-02-02 00:57:26.384911 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-02-02 00:57:26.384917 | orchestrator | Monday 02 February 2026 00:56:42 +0000 (0:00:00.923) 0:06:27.061 *******
2026-02-02 00:57:26.384922 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:57:26.384928 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:57:26.384933 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:57:26.384939 | orchestrator |
2026-02-02 00:57:26.384944 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-02-02 00:57:26.384950 | orchestrator | Monday 02 February 2026 00:56:44 +0000 (0:00:01.413) 0:06:28.475 *******
2026-02-02 00:57:26.384955 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:57:26.384961 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:57:26.384966 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:57:26.384972 | orchestrator |
2026-02-02 00:57:26.384978 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-02 00:57:26.384983 | orchestrator | Monday 02 February 2026 00:56:54 +0000 (0:00:10.050) 0:06:38.525 *******
2026-02-02 00:57:26.384989 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:57:26.384994 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:57:26.385000 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:57:26.385005 | orchestrator |
2026-02-02 00:57:26.385011 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-02 00:57:26.385017 | orchestrator | Monday 02 February 2026 00:56:55 +0000 (0:00:00.806) 0:06:39.332 *******
2026-02-02 00:57:26.385022 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:57:26.385028 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:57:26.385040 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:57:26.385045 | orchestrator |
2026-02-02 00:57:26.385051 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-02 00:57:26.385056 | orchestrator | Monday 02 February 2026 00:57:03 +0000 (0:00:08.635) 0:06:47.968 *******
2026-02-02 00:57:26.385062 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:57:26.385067 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:57:26.385073 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:57:26.385078 | orchestrator |
2026-02-02 00:57:26.385084 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-02 00:57:26.385089 | orchestrator | Monday 02 February 2026 00:57:08 +0000 (0:00:04.835) 0:06:52.803 *******
2026-02-02 00:57:26.385095 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:57:26.385101 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:57:26.385106 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:57:26.385112 | orchestrator |
2026-02-02 00:57:26.385120 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-02 00:57:26.385126 | orchestrator | Monday 02 February 2026 00:57:17 +0000 (0:00:09.107) 0:07:01.911 *******
2026-02-02 00:57:26.385134 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.385143 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.385152 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.385159 | orchestrator |
2026-02-02 00:57:26.385175 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-02 00:57:26.385187 | orchestrator | Monday 02 February 2026 00:57:18 +0000 (0:00:00.387) 0:07:02.299 *******
2026-02-02 00:57:26.385195 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.385209 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.385235 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.385243 | orchestrator |
2026-02-02 00:57:26.385252 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-02 00:57:26.385259 | orchestrator | Monday 02 February 2026 00:57:18 +0000 (0:00:00.375) 0:07:02.675 *******
2026-02-02 00:57:26.385266 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.385274 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.385281 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.385289 | orchestrator |
2026-02-02 00:57:26.385296 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-02 00:57:26.385304 | orchestrator | Monday 02 February 2026 00:57:19 +0000 (0:00:00.804) 0:07:03.480 *******
2026-02-02 00:57:26.385312 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.385321 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.385329 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.385338 | orchestrator |
2026-02-02 00:57:26.385346 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-02 00:57:26.385356 | orchestrator | Monday 02 February 2026 00:57:19 +0000 (0:00:00.385) 0:07:03.865 *******
2026-02-02 00:57:26.385364 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.385373 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.385382 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.385391 | orchestrator |
2026-02-02 00:57:26.385398 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-02 00:57:26.385404 | orchestrator | Monday 02 February 2026 00:57:20 +0000 (0:00:00.360) 0:07:04.225 *******
2026-02-02 00:57:26.385410 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:57:26.385415 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:57:26.385421 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:57:26.385426 | orchestrator |
2026-02-02 00:57:26.385431 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-02 00:57:26.385437 | orchestrator | Monday 02 February 2026 00:57:20 +0000 (0:00:00.412) 0:07:04.638 *******
2026-02-02 00:57:26.385442 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:57:26.385448 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:57:26.385454 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:57:26.385465 | orchestrator |
2026-02-02 00:57:26.385471 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-02 00:57:26.385477 | orchestrator | Monday 02 February 2026 00:57:22 +0000 (0:00:02.023) 0:07:06.661 *******
2026-02-02 00:57:26.385483 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:57:26.385488 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:57:26.385494 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:57:26.385499 | orchestrator |
2026-02-02 00:57:26.385505 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:57:26.385510 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-02-02 00:57:26.385517 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-02-02 00:57:26.385523 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-02-02 00:57:26.385528 | orchestrator |
2026-02-02 00:57:26.385534 | orchestrator |
2026-02-02 00:57:26.385539 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:57:26.385545 | orchestrator | Monday 02 February 2026 00:57:23 +0000 (0:00:00.984) 0:07:07.646 *******
2026-02-02 00:57:26.385550 | orchestrator | ===============================================================================
2026-02-02 00:57:26.385559 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.05s
2026-02-02 00:57:26.385568 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.11s
2026-02-02 00:57:26.385578 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.64s
2026-02-02 00:57:26.385591 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.17s
2026-02-02 00:57:26.385600 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.14s
2026-02-02 00:57:26.385608 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.05s
2026-02-02 00:57:26.385616 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.56s
2026-02-02 00:57:26.385625 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 5.45s
2026-02-02 00:57:26.385633 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.20s
2026-02-02 00:57:26.385641 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.10s
2026-02-02 00:57:26.385649 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.07s
2026-02-02 00:57:26.385659 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.99s
2026-02-02 00:57:26.385668 | orchestrator | haproxy-config : Copying over mariadb haproxy config -------------------- 4.97s
2026-02-02 00:57:26.385676 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.93s
2026-02-02 00:57:26.385691 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.93s
2026-02-02 00:57:26.385701 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.84s
2026-02-02 00:57:26.385710 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.84s
2026-02-02 00:57:26.385718 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 4.62s
2026-02-02 00:57:26.385727 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.56s
2026-02-02 00:57:26.385737 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.55s
2026-02-02 00:57:29.391053 | orchestrator | 2026-02-02 00:57:29 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED
2026-02-02 00:57:29.392624 | orchestrator | 2026-02-02 00:57:29 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED
2026-02-02 00:57:29.393537 | orchestrator | 2026-02-02 00:57:29 | INFO  | Task
52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:57:29.395610 | orchestrator | 2026-02-02 00:57:29 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:59:16.170835 | orchestrator | 2026-02-02 00:59:16 | INFO  | Task 
e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED 2026-02-02 00:59:16.173692 | orchestrator | 2026-02-02 00:59:16 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED 2026-02-02 00:59:16.176666 | orchestrator | 2026-02-02 00:59:16 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:59:16.176708 | orchestrator | 2026-02-02 00:59:16 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:59:19.224645 | orchestrator | 2026-02-02 00:59:19 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED 2026-02-02 00:59:19.226736 | orchestrator | 2026-02-02 00:59:19 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED 2026-02-02 00:59:19.229057 | orchestrator | 2026-02-02 00:59:19 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:59:19.229314 | orchestrator | 2026-02-02 00:59:19 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:59:22.280408 | orchestrator | 2026-02-02 00:59:22 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED 2026-02-02 00:59:22.281886 | orchestrator | 2026-02-02 00:59:22 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED 2026-02-02 00:59:22.282938 | orchestrator | 2026-02-02 00:59:22 | INFO  | Task 52321900-30da-46ca-9653-35a820b3b1b1 is in state STARTED 2026-02-02 00:59:22.283190 | orchestrator | 2026-02-02 00:59:22 | INFO  | Wait 1 second(s) until the next check 2026-02-02 00:59:25.334932 | orchestrator | 2026-02-02 00:59:25 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED 2026-02-02 00:59:25.337408 | orchestrator | 2026-02-02 00:59:25 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED 2026-02-02 00:59:25.340765 | orchestrator | 2026-02-02 00:59:25 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED 2026-02-02 00:59:25.349605 | orchestrator | 2026-02-02 00:59:25 | INFO  | Task 
52321900-30da-46ca-9653-35a820b3b1b1 is in state SUCCESS 2026-02-02 00:59:25.350522 | orchestrator | 2026-02-02 00:59:25.353497 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-02 00:59:25.353600 | orchestrator | 2.16.14 2026-02-02 00:59:25.353618 | orchestrator | 2026-02-02 00:59:25.353631 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-02 00:59:25.353643 | orchestrator | 2026-02-02 00:59:25.353655 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 00:59:25.353667 | orchestrator | Monday 02 February 2026 00:47:36 +0000 (0:00:00.895) 0:00:00.895 ******* 2026-02-02 00:59:25.353680 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.353693 | orchestrator | 2026-02-02 00:59:25.353704 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-02 00:59:25.353716 | orchestrator | Monday 02 February 2026 00:47:37 +0000 (0:00:01.236) 0:00:02.132 ******* 2026-02-02 00:59:25.353727 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.353738 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.353974 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.354000 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.354098 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.354119 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.354132 | orchestrator | 2026-02-02 00:59:25.354146 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 00:59:25.354159 | orchestrator | Monday 02 February 2026 00:47:39 +0000 (0:00:01.829) 0:00:03.961 ******* 2026-02-02 00:59:25.354172 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.354184 | orchestrator | ok: 
[testbed-node-1] 2026-02-02 00:59:25.354198 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.354211 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.354223 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.354235 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.354247 | orchestrator | 2026-02-02 00:59:25.354260 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 00:59:25.354272 | orchestrator | Monday 02 February 2026 00:47:40 +0000 (0:00:00.901) 0:00:04.863 ******* 2026-02-02 00:59:25.354285 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.354299 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.354310 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.354323 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.354336 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.354349 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.354469 | orchestrator | 2026-02-02 00:59:25.354492 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 00:59:25.354509 | orchestrator | Monday 02 February 2026 00:47:41 +0000 (0:00:00.958) 0:00:05.821 ******* 2026-02-02 00:59:25.354866 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.354878 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.354889 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.354900 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.354910 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.354922 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.354933 | orchestrator | 2026-02-02 00:59:25.354944 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 00:59:25.354956 | orchestrator | Monday 02 February 2026 00:47:42 +0000 (0:00:00.869) 0:00:06.691 ******* 2026-02-02 00:59:25.354967 | orchestrator | ok: [testbed-node-0] 2026-02-02 
00:59:25.354978 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.354989 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.355015 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.355026 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.355037 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.355048 | orchestrator | 2026-02-02 00:59:25.355059 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 00:59:25.355094 | orchestrator | Monday 02 February 2026 00:47:43 +0000 (0:00:00.669) 0:00:07.361 ******* 2026-02-02 00:59:25.355108 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.355119 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.355150 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.355162 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.355173 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.355184 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.355194 | orchestrator | 2026-02-02 00:59:25.355206 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 00:59:25.355217 | orchestrator | Monday 02 February 2026 00:47:44 +0000 (0:00:01.138) 0:00:08.499 ******* 2026-02-02 00:59:25.355229 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.355240 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.355252 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.355263 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.355274 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.355285 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.355296 | orchestrator | 2026-02-02 00:59:25.355307 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-02 00:59:25.355318 | orchestrator | Monday 02 February 2026 00:47:45 +0000 (0:00:00.982) 0:00:09.481 ******* 
2026-02-02 00:59:25.355329 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.355340 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.355367 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.355379 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.355390 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.355400 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.355411 | orchestrator | 2026-02-02 00:59:25.355423 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 00:59:25.355434 | orchestrator | Monday 02 February 2026 00:47:46 +0000 (0:00:01.166) 0:00:10.647 ******* 2026-02-02 00:59:25.355445 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 00:59:25.355457 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 00:59:25.355468 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 00:59:25.355479 | orchestrator | 2026-02-02 00:59:25.355583 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-02 00:59:25.355596 | orchestrator | Monday 02 February 2026 00:47:47 +0000 (0:00:00.723) 0:00:11.370 ******* 2026-02-02 00:59:25.355607 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.355618 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.355658 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.355670 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.355698 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.355723 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.355734 | orchestrator | 2026-02-02 00:59:25.355745 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 00:59:25.355757 | orchestrator | Monday 02 February 2026 00:47:48 +0000 (0:00:01.468) 0:00:12.839 ******* 2026-02-02 
00:59:25.355768 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 00:59:25.355779 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 00:59:25.355791 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 00:59:25.355802 | orchestrator | 2026-02-02 00:59:25.355813 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 00:59:25.355824 | orchestrator | Monday 02 February 2026 00:47:51 +0000 (0:00:03.345) 0:00:16.184 ******* 2026-02-02 00:59:25.355835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-02 00:59:25.355846 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-02 00:59:25.355858 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 00:59:25.355869 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.355880 | orchestrator | 2026-02-02 00:59:25.355891 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 00:59:25.355903 | orchestrator | Monday 02 February 2026 00:47:52 +0000 (0:00:00.667) 0:00:16.852 ******* 2026-02-02 00:59:25.355916 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.355931 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.355943 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.355967 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.355978 | orchestrator | 2026-02-02 00:59:25.355989 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 00:59:25.356000 | orchestrator | Monday 02 February 2026 00:47:53 +0000 (0:00:01.245) 0:00:18.097 ******* 2026-02-02 00:59:25.356021 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.356036 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.356048 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.356066 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.356155 | orchestrator | 2026-02-02 00:59:25.356167 | orchestrator | TASK [ceph-facts : Set_fact 
running_mon - container] *************************** 2026-02-02 00:59:25.356179 | orchestrator | Monday 02 February 2026 00:47:54 +0000 (0:00:00.223) 0:00:18.320 ******* 2026-02-02 00:59:25.356205 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 00:47:49.339485', 'end': '2026-02-02 00:47:49.600637', 'delta': '0:00:00.261152', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.356223 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 00:47:50.556900', 'end': '2026-02-02 00:47:50.802760', 'delta': '0:00:00.245860', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.356235 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 00:47:51.477867', 'end': '2026-02-02 
00:47:51.692380', 'delta': '0:00:00.214513', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.356247 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.356258 | orchestrator | 2026-02-02 00:59:25.356269 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 00:59:25.356281 | orchestrator | Monday 02 February 2026 00:47:54 +0000 (0:00:00.409) 0:00:18.730 ******* 2026-02-02 00:59:25.356292 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.356303 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.356314 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.356325 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.356336 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.356347 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.356358 | orchestrator | 2026-02-02 00:59:25.356368 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 00:59:25.356378 | orchestrator | Monday 02 February 2026 00:47:55 +0000 (0:00:01.394) 0:00:20.125 ******* 2026-02-02 00:59:25.356387 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.356397 | orchestrator | 2026-02-02 00:59:25.356407 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 00:59:25.356422 | orchestrator | Monday 02 February 2026 00:47:56 +0000 (0:00:00.982) 0:00:21.107 ******* 2026-02-02 00:59:25.356432 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.356442 | orchestrator 
| skipping: [testbed-node-1] 2026-02-02 00:59:25.356452 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.356470 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.356480 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.356489 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.356499 | orchestrator | 2026-02-02 00:59:25.356509 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 00:59:25.356518 | orchestrator | Monday 02 February 2026 00:47:57 +0000 (0:00:01.010) 0:00:22.118 ******* 2026-02-02 00:59:25.356528 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.356538 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.356548 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.356557 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.356567 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.356577 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.356586 | orchestrator | 2026-02-02 00:59:25.356596 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 00:59:25.356606 | orchestrator | Monday 02 February 2026 00:47:59 +0000 (0:00:02.173) 0:00:24.291 ******* 2026-02-02 00:59:25.356616 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.356625 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.356635 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.356645 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.356655 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.356665 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.356674 | orchestrator | 2026-02-02 00:59:25.356684 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 00:59:25.356694 | orchestrator | Monday 02 February 2026 00:48:00 +0000 
(0:00:00.987) 0:00:25.278 ******* 2026-02-02 00:59:25.356704 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.356713 | orchestrator | 2026-02-02 00:59:25.356723 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 00:59:25.356733 | orchestrator | Monday 02 February 2026 00:48:01 +0000 (0:00:00.117) 0:00:25.396 ******* 2026-02-02 00:59:25.356743 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.356753 | orchestrator | 2026-02-02 00:59:25.356763 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 00:59:25.356773 | orchestrator | Monday 02 February 2026 00:48:01 +0000 (0:00:00.318) 0:00:25.714 ******* 2026-02-02 00:59:25.356782 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.356792 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.356801 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.356811 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.356821 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.356831 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.356841 | orchestrator | 2026-02-02 00:59:25.356857 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 00:59:25.356868 | orchestrator | Monday 02 February 2026 00:48:02 +0000 (0:00:01.056) 0:00:26.771 ******* 2026-02-02 00:59:25.356878 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.356888 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.356898 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.356907 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.356917 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.356927 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.356936 | orchestrator | 2026-02-02 00:59:25.356946 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-02-02 00:59:25.356957 | orchestrator | Monday 02 February 2026 00:48:03 +0000 (0:00:01.274) 0:00:28.045 ******* 2026-02-02 00:59:25.356967 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.356976 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.356986 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.356996 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.357006 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.357016 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.357033 | orchestrator | 2026-02-02 00:59:25.357043 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 00:59:25.357053 | orchestrator | Monday 02 February 2026 00:48:04 +0000 (0:00:00.742) 0:00:28.788 ******* 2026-02-02 00:59:25.357063 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.357100 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.357111 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.357121 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.357130 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.357140 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.357150 | orchestrator | 2026-02-02 00:59:25.357161 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 00:59:25.357171 | orchestrator | Monday 02 February 2026 00:48:05 +0000 (0:00:01.495) 0:00:30.284 ******* 2026-02-02 00:59:25.357181 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.357190 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.357200 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.357210 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.357220 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.357229 | orchestrator 
| skipping: [testbed-node-5] 2026-02-02 00:59:25.357239 | orchestrator | 2026-02-02 00:59:25.357250 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 00:59:25.357259 | orchestrator | Monday 02 February 2026 00:48:06 +0000 (0:00:00.767) 0:00:31.051 ******* 2026-02-02 00:59:25.357270 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.357280 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.357289 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.357299 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.357309 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.357319 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.357329 | orchestrator | 2026-02-02 00:59:25.357339 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 00:59:25.357349 | orchestrator | Monday 02 February 2026 00:48:07 +0000 (0:00:00.838) 0:00:31.890 ******* 2026-02-02 00:59:25.357360 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.357369 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.357384 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.357395 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.357404 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.357414 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.357424 | orchestrator | 2026-02-02 00:59:25.357434 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 00:59:25.357444 | orchestrator | Monday 02 February 2026 00:48:08 +0000 (0:00:00.727) 0:00:32.617 ******* 2026-02-02 00:59:25.357454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851', 'scsi-SQEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part1', 'scsi-SQEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part14', 'scsi-SQEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part15', 'scsi-SQEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part16', 'scsi-SQEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.357592 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.357605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357636 | orchestrator | skipping: 
[testbed-node-0] 2026-02-02 00:59:25.357646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7', 'scsi-SQEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part1', 'scsi-SQEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part14', 'scsi-SQEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part15', 'scsi-SQEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part16', 'scsi-SQEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.357817 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.357838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c', 'scsi-SQEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part1', 'scsi-SQEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part14', 'scsi-SQEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part15', 'scsi-SQEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part16', 'scsi-SQEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.357880 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.357900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91c179ef--578a--54fb--a2b0--5b892bd3ac18-osd--block--91c179ef--578a--54fb--a2b0--5b892bd3ac18', 'dm-uuid-LVM-CBmyVChEmESNLeBT1MkMSINSk2ajOcjEVW1F4EfsafZbwh6CUXur0lvhuPqTJPPb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91730114--ee0c--5e20--9378--f20099298830-osd--block--91730114--ee0c--5e20--9378--f20099298830', 'dm-uuid-LVM-cUnpQzpbDAyiRw22abVs1EKRXWL8W9zR4MbPrzayvu0R20HyrCa9xvCO30c5hMd1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.357993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358052 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.358066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--604951f0--1bde--54b3--957a--2369560b0fa2-osd--block--604951f0--1bde--54b3--957a--2369560b0fa2', 'dm-uuid-LVM-3ZXwImDyw4fF3NRZj3QiF1GeKyro3QPIB5j6nFCKbMCu9pskJdjnWtnHnR26AsdQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edd20676--fc89--5b2b--b977--99722e90cce2-osd--block--edd20676--fc89--5b2b--b977--99722e90cce2', 'dm-uuid-LVM-oNyUsA3TIQFQdmqZqwf2HNQmWcYixTjxAT4UNR5pMsPpjE904WGrtx3GjqJt36Nz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--91c179ef--578a--54fb--a2b0--5b892bd3ac18-osd--block--91c179ef--578a--54fb--a2b0--5b892bd3ac18'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6sQpnu-oeog-9uSQ-8irI-YO3i-03S8-oT4k1a', 'scsi-0QEMU_QEMU_HARDDISK_08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8', 'scsi-SQEMU_QEMU_HARDDISK_08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--91730114--ee0c--5e20--9378--f20099298830-osd--block--91730114--ee0c--5e20--9378--f20099298830'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XwcXUw-Uu1v-fj1R-vnAq-N5DG-f8Qb-00N7Lp', 'scsi-0QEMU_QEMU_HARDDISK_2791ba27-caf1-4c6d-bd50-3e0320bbaa42', 'scsi-SQEMU_QEMU_HARDDISK_2791ba27-caf1-4c6d-bd50-3e0320bbaa42'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0ea612a-524a-49e0-9350-b51de64b4b0f', 'scsi-SQEMU_QEMU_HARDDISK_b0ea612a-524a-49e0-9350-b51de64b4b0f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358309 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.358326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': 
'506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358368 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.358382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7-osd--block--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7', 'dm-uuid-LVM-OEBwCC3YuzLICYw8CHTrGZLG0LKYZkCfZwwIAaAKsinrIAxMVCTNbWKnsZs1YSLA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0f572543--3461--541d--9614--18cfec52b251-osd--block--0f572543--3461--541d--9614--18cfec52b251', 
'dm-uuid-LVM-VwC4RRIV6z7NJSpHMy12KJpoxleDt2OKYXZRfTXCKg782JFNl5F2SUpzRNLA70fM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358456 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--604951f0--1bde--54b3--957a--2369560b0fa2-osd--block--604951f0--1bde--54b3--957a--2369560b0fa2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HKpUVV-X4dA-EDHp-ilRF-QPyn-Eq8n-30cnG2', 'scsi-0QEMU_QEMU_HARDDISK_8be21885-29f7-4026-87ce-cd032f624f70', 'scsi-SQEMU_QEMU_HARDDISK_8be21885-29f7-4026-87ce-cd032f624f70'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--edd20676--fc89--5b2b--b977--99722e90cce2-osd--block--edd20676--fc89--5b2b--b977--99722e90cce2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OR8WAX-OQWJ-XEg9-Wwht-yTg9-CKMR-3T1I3f', 'scsi-0QEMU_QEMU_HARDDISK_78cf2400-96ef-4814-8ef8-9c5b7903f7b2', 'scsi-SQEMU_QEMU_HARDDISK_78cf2400-96ef-4814-8ef8-9c5b7903f7b2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2465f817-ac52-4990-8055-49becb307e2f', 'scsi-SQEMU_QEMU_HARDDISK_2465f817-ac52-4990-8055-49becb307e2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-23-00']}, 'model': 'QEMU 
DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358534 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.358544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 00:59:25.358617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part1', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part14', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part15', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part15'], 'labels': ['UEFI'], 'masters': [], 
'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part16', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7-osd--block--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WGi4of-ix8g-g3UD-7qOZ-6j2X-fOzY-1PZkAt', 'scsi-0QEMU_QEMU_HARDDISK_09bfd8bd-6f2f-4d2c-8da9-081114b71f81', 'scsi-SQEMU_QEMU_HARDDISK_09bfd8bd-6f2f-4d2c-8da9-081114b71f81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0f572543--3461--541d--9614--18cfec52b251-osd--block--0f572543--3461--541d--9614--18cfec52b251'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YhDmgn-v3yM-kiZc-JIhA-3oL5-HNY3-C0uZ5o', 'scsi-0QEMU_QEMU_HARDDISK_16c98517-e0bb-4d3e-881d-5a0c6479c324', 'scsi-SQEMU_QEMU_HARDDISK_16c98517-e0bb-4d3e-881d-5a0c6479c324'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6ca68c3-b66c-4649-954f-01a9ba336075', 'scsi-SQEMU_QEMU_HARDDISK_b6ca68c3-b66c-4649-954f-01a9ba336075'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 00:59:25.358691 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.358701 | orchestrator | 2026-02-02 00:59:25.358711 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-02-02 00:59:25.358721 | orchestrator | Monday 02 February 2026 00:48:09 +0000 (0:00:01.638) 0:00:34.256 ******* 2026-02-02 00:59:25.358731 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.358742 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.358763 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.358777 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.358788 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.358799 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.359552 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.359582 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.359687 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851', 'scsi-SQEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part1', 'scsi-SQEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part14', 'scsi-SQEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part15', 'scsi-SQEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part16', 'scsi-SQEMU_QEMU_HARDDISK_e82d7007-471e-45f2-a897-21e3387dc851-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-02 00:59:25.360112 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360137 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.360149 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360160 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360180 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360197 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360208 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360218 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360260 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360271 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360295 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7', 'scsi-SQEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part1', 'scsi-SQEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part14', 'scsi-SQEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part15', 'scsi-SQEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part16', 'scsi-SQEMU_QEMU_HARDDISK_4323fcbf-cb44-453a-b59c-b231361fa0b7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360308 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360341 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360352 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.360369 | orchestrator 
| skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360380 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360401 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360416 | orchestrator | skipping: [testbed-node-2] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360427 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360949 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360972 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.360999 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c', 'scsi-SQEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part1', 'scsi-SQEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part14', 'scsi-SQEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part15', 
'scsi-SQEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part16', 'scsi-SQEMU_QEMU_HARDDISK_f67d8d7d-cab6-4bbb-8cc4-b9f8341bcc0c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361012 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361261 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--91c179ef--578a--54fb--a2b0--5b892bd3ac18-osd--block--91c179ef--578a--54fb--a2b0--5b892bd3ac18', 'dm-uuid-LVM-CBmyVChEmESNLeBT1MkMSINSk2ajOcjEVW1F4EfsafZbwh6CUXur0lvhuPqTJPPb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361291 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91730114--ee0c--5e20--9378--f20099298830-osd--block--91730114--ee0c--5e20--9378--f20099298830', 'dm-uuid-LVM-cUnpQzpbDAyiRw22abVs1EKRXWL8W9zR4MbPrzayvu0R20HyrCa9xvCO30c5hMd1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361302 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-02-02 00:59:25.361318 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361329 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.361340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361350 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-02-02 00:59:25.361434 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361457 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361467 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361482 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361573 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361600 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--604951f0--1bde--54b3--957a--2369560b0fa2-osd--block--604951f0--1bde--54b3--957a--2369560b0fa2', 'dm-uuid-LVM-3ZXwImDyw4fF3NRZj3QiF1GeKyro3QPIB5j6nFCKbMCu9pskJdjnWtnHnR26AsdQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361611 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--91c179ef--578a--54fb--a2b0--5b892bd3ac18-osd--block--91c179ef--578a--54fb--a2b0--5b892bd3ac18'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6sQpnu-oeog-9uSQ-8irI-YO3i-03S8-oT4k1a', 'scsi-0QEMU_QEMU_HARDDISK_08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8', 'scsi-SQEMU_QEMU_HARDDISK_08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361632 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edd20676--fc89--5b2b--b977--99722e90cce2-osd--block--edd20676--fc89--5b2b--b977--99722e90cce2', 'dm-uuid-LVM-oNyUsA3TIQFQdmqZqwf2HNQmWcYixTjxAT4UNR5pMsPpjE904WGrtx3GjqJt36Nz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--91730114--ee0c--5e20--9378--f20099298830-osd--block--91730114--ee0c--5e20--9378--f20099298830'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XwcXUw-Uu1v-fj1R-vnAq-N5DG-f8Qb-00N7Lp', 'scsi-0QEMU_QEMU_HARDDISK_2791ba27-caf1-4c6d-bd50-3e0320bbaa42', 'scsi-SQEMU_QEMU_HARDDISK_2791ba27-caf1-4c6d-bd50-3e0320bbaa42'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361724 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361752 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0ea612a-524a-49e0-9350-b51de64b4b0f', 'scsi-SQEMU_QEMU_HARDDISK_b0ea612a-524a-49e0-9350-b51de64b4b0f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361770 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361787 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361798 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361808 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.361819 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361902 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361917 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361928 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7-osd--block--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7', 'dm-uuid-LVM-OEBwCC3YuzLICYw8CHTrGZLG0LKYZkCfZwwIAaAKsinrIAxMVCTNbWKnsZs1YSLA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361944 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361955 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0f572543--3461--541d--9614--18cfec52b251-osd--block--0f572543--3461--541d--9614--18cfec52b251', 'dm-uuid-LVM-VwC4RRIV6z7NJSpHMy12KJpoxleDt2OKYXZRfTXCKg782JFNl5F2SUpzRNLA70fM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.361965 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362154 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362191 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part16', 
'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362205 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362287 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--604951f0--1bde--54b3--957a--2369560b0fa2-osd--block--604951f0--1bde--54b3--957a--2369560b0fa2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HKpUVV-X4dA-EDHp-ilRF-QPyn-Eq8n-30cnG2', 'scsi-0QEMU_QEMU_HARDDISK_8be21885-29f7-4026-87ce-cd032f624f70', 'scsi-SQEMU_QEMU_HARDDISK_8be21885-29f7-4026-87ce-cd032f624f70'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362313 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--edd20676--fc89--5b2b--b977--99722e90cce2-osd--block--edd20676--fc89--5b2b--b977--99722e90cce2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OR8WAX-OQWJ-XEg9-Wwht-yTg9-CKMR-3T1I3f', 'scsi-0QEMU_QEMU_HARDDISK_78cf2400-96ef-4814-8ef8-9c5b7903f7b2', 'scsi-SQEMU_QEMU_HARDDISK_78cf2400-96ef-4814-8ef8-9c5b7903f7b2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362324 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2465f817-ac52-4990-8055-49becb307e2f', 'scsi-SQEMU_QEMU_HARDDISK_2465f817-ac52-4990-8055-49becb307e2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362339 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362350 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362361 | orchestrator | skipping: 
[testbed-node-4] 2026-02-02 00:59:25.362371 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362468 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362484 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362495 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362510 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362583 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part1', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part14', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part15', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part16', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-02 00:59:25.362608 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7-osd--block--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WGi4of-ix8g-g3UD-7qOZ-6j2X-fOzY-1PZkAt', 'scsi-0QEMU_QEMU_HARDDISK_09bfd8bd-6f2f-4d2c-8da9-081114b71f81', 'scsi-SQEMU_QEMU_HARDDISK_09bfd8bd-6f2f-4d2c-8da9-081114b71f81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362624 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0f572543--3461--541d--9614--18cfec52b251-osd--block--0f572543--3461--541d--9614--18cfec52b251'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YhDmgn-v3yM-kiZc-JIhA-3oL5-HNY3-C0uZ5o', 'scsi-0QEMU_QEMU_HARDDISK_16c98517-e0bb-4d3e-881d-5a0c6479c324', 'scsi-SQEMU_QEMU_HARDDISK_16c98517-e0bb-4d3e-881d-5a0c6479c324'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362635 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6ca68c3-b66c-4649-954f-01a9ba336075', 'scsi-SQEMU_QEMU_HARDDISK_b6ca68c3-b66c-4649-954f-01a9ba336075'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 00:59:25.362653 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-02 00:59:25.362664 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.362674 | orchestrator |
2026-02-02 00:59:25.362685 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-02 00:59:25.362697 | orchestrator | Monday 02 February 2026 00:48:11 +0000 (0:00:01.879) 0:00:36.135 *******
2026-02-02 00:59:25.362770 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.362785 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.362816 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.362826 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.362837 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.362847 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.362857 | orchestrator |
2026-02-02 00:59:25.362867 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-02 00:59:25.362878 | orchestrator | Monday 02 February 2026 00:48:13 +0000 (0:00:01.922) 0:00:38.058 *******
2026-02-02 00:59:25.362888 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.362898 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.362908 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.362918 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.362928 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.362938 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.362948 | orchestrator |
2026-02-02 00:59:25.362958 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 00:59:25.362969 | orchestrator | Monday 02 February 2026 00:48:15 +0000 (0:00:01.291) 0:00:39.350 *******
2026-02-02 00:59:25.362979 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.362989 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.362999 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.363009 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.363019 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.363029 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.363039 | orchestrator |
2026-02-02 00:59:25.363049 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 00:59:25.363099 | orchestrator | Monday 02 February 2026 00:48:16 +0000 (0:00:01.103) 0:00:40.453 *******
2026-02-02 00:59:25.363113 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.363122 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.363132 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.363142 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.363151 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.363161 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.363171 | orchestrator |
2026-02-02 00:59:25.363181 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 00:59:25.363190 | orchestrator | Monday 02 February 2026 00:48:17 +0000 (0:00:00.940) 0:00:41.393 *******
2026-02-02 00:59:25.363200 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.363210 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.363220 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.363229 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.363239 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.363248 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.363275 | orchestrator |
2026-02-02 00:59:25.363286 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 00:59:25.363296 | orchestrator | Monday 02 February 2026 00:48:18 +0000 (0:00:01.502) 0:00:42.896 *******
2026-02-02 00:59:25.363306 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.363316 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.363325 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.363335 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.363345 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.363354 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.363364 | orchestrator |
2026-02-02 00:59:25.363374 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-02 00:59:25.363394 | orchestrator | Monday 02 February 2026 00:48:19 +0000 (0:00:01.038) 0:00:43.935 *******
2026-02-02 00:59:25.363406 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 00:59:25.363417 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-02 00:59:25.363429 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-02 00:59:25.363441 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-02 00:59:25.363452 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 00:59:25.363463 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-02 00:59:25.363474 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-02 00:59:25.363486 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-02 00:59:25.363497 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-02 00:59:25.363508 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-02 00:59:25.363520 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 00:59:25.363531 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-02 00:59:25.363542 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 00:59:25.363554 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-02 00:59:25.363565 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 00:59:25.363576 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-02 00:59:25.363588 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-02 00:59:25.363599 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 00:59:25.363608 | orchestrator |
2026-02-02 00:59:25.363618 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-02 00:59:25.363628 | orchestrator | Monday 02 February 2026 00:48:22 +0000 (0:00:03.161) 0:00:47.098 *******
2026-02-02 00:59:25.363638 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 00:59:25.363648 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-02 00:59:25.363657 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-02 00:59:25.363667 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-02 00:59:25.363676 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 00:59:25.363686 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-02 00:59:25.363696 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.363705 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-02 00:59:25.363715 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.363725 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-02 00:59:25.363769 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 00:59:25.363781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-02 00:59:25.363791 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-02 00:59:25.363801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-02 00:59:25.363811 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.363820 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 00:59:25.363836 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 00:59:25.363846 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 00:59:25.363856 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.363866 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.363875 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-02 00:59:25.363885 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-02 00:59:25.363894 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-02 00:59:25.363904 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.363914 | orchestrator |
2026-02-02 00:59:25.363924 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-02 00:59:25.363934 | orchestrator | Monday 02 February 2026 00:48:23 +0000 (0:00:00.724) 0:00:47.823 *******
2026-02-02 00:59:25.363943 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.363953 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.363963 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.363974 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.363983 | orchestrator |
2026-02-02 00:59:25.363993 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 00:59:25.364005 | orchestrator | Monday 02 February 2026 00:48:24 +0000 (0:00:01.088) 0:00:48.911 *******
2026-02-02 00:59:25.364015 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.364024 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.364034 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.364044 | orchestrator |
2026-02-02 00:59:25.364054 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 00:59:25.364064 | orchestrator | Monday 02 February 2026 00:48:25 +0000 (0:00:00.463) 0:00:49.375 *******
2026-02-02 00:59:25.364131 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.364142 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.364157 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.364175 | orchestrator |
2026-02-02 00:59:25.364190 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 00:59:25.364203 | orchestrator | Monday 02 February 2026 00:48:25 +0000 (0:00:00.676) 0:00:50.052 *******
2026-02-02 00:59:25.364218 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.364233 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.364246 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.364260 | orchestrator |
2026-02-02 00:59:25.364274 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 00:59:25.364288 | orchestrator | Monday 02 February 2026 00:48:26 +0000 (0:00:00.827) 0:00:50.477 *******
2026-02-02 00:59:25.364296 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.364304 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.364312 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.364320 | orchestrator |
2026-02-02 00:59:25.364329 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 00:59:25.364342 | orchestrator | Monday 02 February 2026 00:48:26 +0000 (0:00:00.609) 0:00:51.304 *******
2026-02-02 00:59:25.364355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 00:59:25.364368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 00:59:25.364381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 00:59:25.364393 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.364406 | orchestrator |
2026-02-02 00:59:25.364418 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 00:59:25.364430 | orchestrator | Monday 02 February 2026 00:48:27 +0000 (0:00:00.609) 0:00:51.914 *******
2026-02-02 00:59:25.364442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 00:59:25.364454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 00:59:25.364475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 00:59:25.364489 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.364503 | orchestrator |
2026-02-02 00:59:25.364516 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 00:59:25.364528 | orchestrator | Monday 02 February 2026 00:48:28 +0000 (0:00:00.635) 0:00:52.549 *******
2026-02-02 00:59:25.364536 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 00:59:25.364544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 00:59:25.364552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 00:59:25.364560 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.364568 | orchestrator |
2026-02-02 00:59:25.364577 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 00:59:25.364585 | orchestrator | Monday 02 February 2026 00:48:28 +0000 (0:00:00.415) 0:00:52.965 *******
2026-02-02 00:59:25.364593 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.364601 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.364609 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.364618 | orchestrator |
2026-02-02 00:59:25.364626 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 00:59:25.364634 | orchestrator | Monday 02 February 2026 00:48:29 +0000 (0:00:00.379) 0:00:53.344 *******
2026-02-02 00:59:25.364642 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-02 00:59:25.364650 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-02 00:59:25.364658 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-02 00:59:25.364666 | orchestrator |
2026-02-02 00:59:25.364711 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-02 00:59:25.364726 | orchestrator | Monday 02 February 2026 00:48:30 +0000 (0:00:01.250) 0:00:54.595 *******
2026-02-02 00:59:25.364738 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 00:59:25.364751 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 00:59:25.364764 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 00:59:25.364777 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 00:59:25.364791 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 00:59:25.364805 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 00:59:25.364819 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 00:59:25.364832 | orchestrator |
2026-02-02 00:59:25.364844 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-02 00:59:25.364858 | orchestrator | Monday 02 February 2026 00:48:31 +0000 (0:00:01.400) 0:00:55.995 *******
2026-02-02 00:59:25.364873 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 00:59:25.364888 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 00:59:25.364901 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 00:59:25.364914 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 00:59:25.364926 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 00:59:25.364938 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 00:59:25.364951 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 00:59:25.364963 | orchestrator |
2026-02-02 00:59:25.364976 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 00:59:25.364988 | orchestrator | Monday 02 February 2026 00:48:33 +0000 (0:00:02.149) 0:00:58.145 *******
2026-02-02 00:59:25.365002 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.365028 | orchestrator |
2026-02-02 00:59:25.365041 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 00:59:25.365055 | orchestrator | Monday 02 February 2026 00:48:35 +0000 (0:00:01.285) 0:00:59.430 *******
2026-02-02 00:59:25.365065 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.365104 | orchestrator |
2026-02-02 00:59:25.365124 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 00:59:25.365137 | orchestrator | Monday 02 February 2026 00:48:36 +0000 (0:00:01.654) 0:01:01.084 *******
2026-02-02 00:59:25.365159 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.365174 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.365185 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.365197 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.365209 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.365220 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.365231 | orchestrator |
2026-02-02 00:59:25.365243 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 00:59:25.365254 | orchestrator | Monday 02 February 2026 00:48:37 +0000 (0:00:01.182) 0:01:02.267 *******
2026-02-02 00:59:25.365265 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.365277 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.365291 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.365304 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.365318 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.365332 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.365345 | orchestrator |
2026-02-02 00:59:25.365358 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 00:59:25.365370 | orchestrator | Monday 02 February 2026 00:48:39 +0000 (0:00:01.251) 0:01:03.518 *******
2026-02-02 00:59:25.365379 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.365387 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.365395 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.365403 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.365411 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.365418 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.365426 | orchestrator |
2026-02-02 00:59:25.365434 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 00:59:25.365442 | orchestrator | Monday 02 February 2026 00:48:40 +0000 (0:00:01.587) 0:01:05.106 *******
2026-02-02 00:59:25.365450 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.365458 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.365466 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.365474 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.365482 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.365490 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.365498 | orchestrator |
2026-02-02 00:59:25.365506 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 00:59:25.365515 | orchestrator | Monday 02 February 2026 00:48:42 +0000 (0:00:01.613) 0:01:06.719 *******
2026-02-02 00:59:25.365524 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.365538 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.365550 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.365562 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.365574 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.365587 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.365600 | orchestrator |
2026-02-02 00:59:25.365613 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 00:59:25.365684 | orchestrator | Monday 02 February 2026 00:48:43 +0000 (0:00:01.333) 0:01:08.053 *******
2026-02-02 00:59:25.365697 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.365721 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.365733 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.365745 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.365756 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.365767 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.365779 | orchestrator |
2026-02-02 00:59:25.365791 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 00:59:25.365803 | orchestrator | Monday 02 February 2026 00:48:45 +0000 (0:00:01.386) 0:01:09.439 *******
2026-02-02 00:59:25.365816 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.365830 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.365844 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.365856 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.365870 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.365879 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.365888 | orchestrator |
2026-02-02 00:59:25.365896 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 00:59:25.365904 | orchestrator | Monday 02 February 2026 00:48:46 +0000 (0:00:00.901) 0:01:10.341 *******
2026-02-02 00:59:25.365915 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.365932 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.365950 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.365964 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.365976 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.365989 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.366001 | orchestrator |
2026-02-02 00:59:25.366013 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-02 00:59:25.366067 | orchestrator | Monday 02 February 2026 00:48:48 +0000 (0:00:02.519) 0:01:12.860 *******
2026-02-02 00:59:25.366148 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.366161 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.366174 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.366184 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.366197 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.366210 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.366223 | orchestrator |
2026-02-02 00:59:25.366236 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-02 00:59:25.366249 | orchestrator | Monday 02 February 2026 00:48:50 +0000 (0:00:01.729) 0:01:14.589 *******
2026-02-02 00:59:25.366263 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.366278 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.366291 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.366305 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.366319 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.366332 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.366345 | orchestrator |
2026-02-02 00:59:25.366354 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 00:59:25.366363 | orchestrator | Monday 02 February 2026 00:48:51 +0000 (0:00:01.341) 0:01:15.931 *******
2026-02-02 00:59:25.366371 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.366379 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.366387 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.366395 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.366403 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.366411 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.366419 | orchestrator |
2026-02-02 00:59:25.366435 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 00:59:25.366443 | orchestrator | Monday 02 February 2026 00:48:52 +0000 (0:00:01.224) 0:01:17.156 *******
2026-02-02 00:59:25.366454 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.366467 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.366481 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.366500 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.366512 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.366537 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.366549 | orchestrator |
2026-02-02 00:59:25.366560 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 00:59:25.366570 | orchestrator | Monday 02 February 2026 00:48:54 +0000 (0:00:01.720) 0:01:18.876 *******
2026-02-02 00:59:25.366582 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.366593 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.366604 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.366616 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.366624 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.366630 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.366637 | orchestrator |
2026-02-02 00:59:25.366644 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 00:59:25.366651 | orchestrator | Monday 02 February 2026 00:48:55 +0000 (0:00:00.875) 0:01:19.752 *******
2026-02-02 00:59:25.366658 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.366664 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.366671 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.366677 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.366684 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.366691 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.366697 | orchestrator |
2026-02-02 00:59:25.366704 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 00:59:25.366711 | orchestrator | Monday 02 February 2026 00:48:57 +0000 (0:00:01.717) 0:01:21.470 *******
2026-02-02 00:59:25.366718 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.366725 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.366731 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.366738 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.366744 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.366768 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.366775 | orchestrator |
2026-02-02 00:59:25.366782 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-02 00:59:25.366789 | orchestrator | Monday 02 February 2026 00:48:58 +0000 (0:00:01.509) 0:01:22.979 *******
2026-02-02 00:59:25.366796 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.366803 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.366809 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.366816 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.366822 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.366829 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.366836 | orchestrator |
2026-02-02 00:59:25.366915 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-02 00:59:25.366931 | orchestrator | Monday 02 February 2026 00:49:00 +0000 (0:00:01.554) 0:01:24.533 *******
2026-02-02 00:59:25.366943 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.366954 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.366977 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.366987 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.366997 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.367008 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.367032 | orchestrator |
2026-02-02 00:59:25.367043 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-02 00:59:25.367053 | orchestrator | Monday 02 February 2026 00:49:01 +0000 (0:00:01.099) 0:01:25.633 *******
2026-02-02 00:59:25.367064 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.367092 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.367103 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.367114 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.367124 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.367134 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.367144 | orchestrator |
2026-02-02 00:59:25.367155 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-02 00:59:25.367166 | orchestrator | Monday 02 February 2026 00:49:02 +0000 (0:00:01.144) 0:01:26.778 *******
2026-02-02 00:59:25.367189 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.367201 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.367212 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.367238 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.367250 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.367260 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.367268 | orchestrator |
2026-02-02 00:59:25.367274 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-02 00:59:25.367281 | orchestrator | Monday 02 February 2026 00:49:03 +0000 (0:00:01.264) 0:01:28.042 *******
2026-02-02 00:59:25.367288 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:59:25.367295 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:59:25.367302 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:59:25.367309 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:59:25.367315 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:59:25.367322 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:59:25.367329 | orchestrator |
2026-02-02 00:59:25.367336 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-02 00:59:25.367342 | orchestrator | Monday 02 February 2026 00:49:05 +0000 (0:00:01.653) 0:01:29.695 *******
2026-02-02 00:59:25.367349 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:59:25.367356 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:59:25.367363 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:59:25.367369 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:59:25.367376 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:59:25.367383 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:59:25.367390 | orchestrator |
2026-02-02 00:59:25.367407 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-02 00:59:25.367414 | orchestrator | Monday 02 February 2026 00:49:07 +0000 (0:00:01.945) 0:01:31.641 *******
2026-02-02 00:59:25.367422 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.367429 | orchestrator |
2026-02-02 00:59:25.367442 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-02 00:59:25.367449 | orchestrator | Monday 02 February 2026 00:49:08 +0000 (0:00:01.102) 0:01:32.743 *******
2026-02-02 00:59:25.367456 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.367463 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.367470 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.367477 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.367483 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.367490 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.367497 | orchestrator |
2026-02-02 00:59:25.367504 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-02 00:59:25.367511 | orchestrator | Monday 02 February 2026 00:49:09 +0000 (0:00:00.707) 0:01:33.450 *******
2026-02-02 00:59:25.367518 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.367524 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.367531 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.367538 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.367545 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.367552 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.367559 | orchestrator |
2026-02-02 00:59:25.367566 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-02 00:59:25.367573 | orchestrator | Monday 02 February 2026 00:49:09 +0000 (0:00:00.689) 0:01:34.140 *******
2026-02-02 00:59:25.367579 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 00:59:25.367586 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 00:59:25.367593 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 00:59:25.367600 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 00:59:25.367622 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 00:59:25.367629 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 00:59:25.367636 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 00:59:25.367643 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 00:59:25.367650 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 00:59:25.367656 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 00:59:25.367663 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 00:59:25.367706 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 00:59:25.367714 | orchestrator |
2026-02-02 00:59:25.367721 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-02 00:59:25.367728 | orchestrator | Monday 02 February 2026 00:49:11 +0000 (0:00:01.318) 0:01:35.459 *******
2026-02-02 00:59:25.367735 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:59:25.367742 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:59:25.367749 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:59:25.367756 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:59:25.367763 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:59:25.367769 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:59:25.367776 | orchestrator |
2026-02-02 00:59:25.367783 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-02 00:59:25.367790 | orchestrator | Monday 02 February 2026 00:49:12 +0000 (0:00:01.203) 0:01:36.663 *******
2026-02-02 00:59:25.367797 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.367804 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.367810 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.367817 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.367824 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.367830 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.367837 | orchestrator |
2026-02-02 00:59:25.367844 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-02 00:59:25.367851 | orchestrator | Monday 02 February 2026 00:49:12 +0000 (0:00:00.632) 0:01:37.295 *******
2026-02-02 00:59:25.367858 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.367864 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.367871 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.367878 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.367885 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.367891 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.367898 | orchestrator |
2026-02-02 00:59:25.367915 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-02 00:59:25.367923 | orchestrator | Monday 02 February 2026 00:49:13 +0000 (0:00:00.699) 0:01:37.995 *******
2026-02-02 00:59:25.367930 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.367936 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.367943 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.367950 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.367957 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.367964 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.367971 | orchestrator |
2026-02-02 00:59:25.367978 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-02 00:59:25.367985 | orchestrator | Monday 02 February 2026 00:49:14 +0000 (0:00:00.549) 0:01:38.544 *******
2026-02-02 00:59:25.367992 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.368005 | orchestrator |
2026-02-02 00:59:25.368012 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-02 00:59:25.368019 | orchestrator | Monday 02 February 2026 00:49:15 +0000 (0:00:01.048) 0:01:39.593 *******
2026-02-02 00:59:25.368026 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.368037 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.368044 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.368051 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.368058 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.368065 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.368093 | orchestrator |
2026-02-02 00:59:25.368104 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-02 00:59:25.368116 | orchestrator | Monday 02 February 2026 00:50:19 +0000 (0:01:04.465) 0:02:44.059 *******
2026-02-02 00:59:25.368127 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 00:59:25.368139 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 00:59:25.368149 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 00:59:25.368161 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.368169 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 00:59:25.368176 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 00:59:25.368183 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 00:59:25.368190 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.368202 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 00:59:25.368213 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 00:59:25.368225 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 00:59:25.368235 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.368246 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 00:59:25.368257 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 00:59:25.368269 | orchestrator | skipping:
[testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-02 00:59:25.368280 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.368291 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-02 00:59:25.368302 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-02 00:59:25.368314 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-02 00:59:25.368325 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.368337 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-02 00:59:25.368388 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-02 00:59:25.368402 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-02 00:59:25.368413 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.368424 | orchestrator | 2026-02-02 00:59:25.368434 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-02 00:59:25.368445 | orchestrator | Monday 02 February 2026 00:50:20 +0000 (0:00:00.659) 0:02:44.719 ******* 2026-02-02 00:59:25.368457 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.368468 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.368480 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.368489 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.368495 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.368502 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.368509 | orchestrator | 2026-02-02 00:59:25.368516 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-02 00:59:25.368532 | orchestrator | Monday 02 February 2026 00:50:21 +0000 (0:00:00.811) 0:02:45.530 ******* 2026-02-02 00:59:25.368539 | 
orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.368546 | orchestrator | 2026-02-02 00:59:25.368553 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-02 00:59:25.368560 | orchestrator | Monday 02 February 2026 00:50:21 +0000 (0:00:00.137) 0:02:45.667 ******* 2026-02-02 00:59:25.368566 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.368573 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.368580 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.368587 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.368594 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.368601 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.368607 | orchestrator | 2026-02-02 00:59:25.368614 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-02 00:59:25.368621 | orchestrator | Monday 02 February 2026 00:50:22 +0000 (0:00:00.716) 0:02:46.384 ******* 2026-02-02 00:59:25.368628 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.368635 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.368642 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.368648 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.368655 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.368662 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.368670 | orchestrator | 2026-02-02 00:59:25.368682 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-02 00:59:25.368692 | orchestrator | Monday 02 February 2026 00:50:22 +0000 (0:00:00.869) 0:02:47.254 ******* 2026-02-02 00:59:25.368702 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.368712 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.368723 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.368734 | 
orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.368745 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.368756 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.368767 | orchestrator | 2026-02-02 00:59:25.368778 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-02 00:59:25.368789 | orchestrator | Monday 02 February 2026 00:50:23 +0000 (0:00:00.944) 0:02:48.199 ******* 2026-02-02 00:59:25.368801 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.368828 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.368836 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.368842 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.368860 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.368867 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.368873 | orchestrator | 2026-02-02 00:59:25.368880 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-02 00:59:25.368887 | orchestrator | Monday 02 February 2026 00:50:26 +0000 (0:00:02.853) 0:02:51.052 ******* 2026-02-02 00:59:25.368894 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.368900 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.368907 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.368914 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.368920 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.368927 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.368934 | orchestrator | 2026-02-02 00:59:25.368941 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-02 00:59:25.368947 | orchestrator | Monday 02 February 2026 00:50:27 +0000 (0:00:00.820) 0:02:51.873 ******* 2026-02-02 00:59:25.368955 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.368965 | orchestrator | 2026-02-02 00:59:25.368975 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-02 00:59:25.368985 | orchestrator | Monday 02 February 2026 00:50:29 +0000 (0:00:01.708) 0:02:53.581 ******* 2026-02-02 00:59:25.368992 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.369004 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.369011 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.369018 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.369025 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.369031 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.369056 | orchestrator | 2026-02-02 00:59:25.369120 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-02 00:59:25.369136 | orchestrator | Monday 02 February 2026 00:50:30 +0000 (0:00:01.394) 0:02:54.975 ******* 2026-02-02 00:59:25.369161 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.369170 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.369177 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.369183 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.369190 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.369197 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.369204 | orchestrator | 2026-02-02 00:59:25.369214 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-02 00:59:25.369225 | orchestrator | Monday 02 February 2026 00:50:31 +0000 (0:00:01.015) 0:02:55.991 ******* 2026-02-02 00:59:25.369237 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.369248 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.369259 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.369269 | 
orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.369279 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.369335 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.369348 | orchestrator | 2026-02-02 00:59:25.369359 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-02 00:59:25.369369 | orchestrator | Monday 02 February 2026 00:50:32 +0000 (0:00:00.792) 0:02:56.784 ******* 2026-02-02 00:59:25.369381 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.369392 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.369404 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.369414 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.369426 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.369438 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.369449 | orchestrator | 2026-02-02 00:59:25.369459 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-02 00:59:25.369470 | orchestrator | Monday 02 February 2026 00:50:33 +0000 (0:00:01.339) 0:02:58.123 ******* 2026-02-02 00:59:25.369479 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.369486 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.369492 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.369510 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.369516 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.369523 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.369529 | orchestrator | 2026-02-02 00:59:25.369535 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-02 00:59:25.369541 | orchestrator | Monday 02 February 2026 00:50:35 +0000 (0:00:01.262) 0:02:59.385 ******* 2026-02-02 00:59:25.369548 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.369554 | 
orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.369560 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.369566 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.369573 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.369579 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.369585 | orchestrator | 2026-02-02 00:59:25.369591 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-02 00:59:25.369598 | orchestrator | Monday 02 February 2026 00:50:36 +0000 (0:00:01.238) 0:03:00.624 ******* 2026-02-02 00:59:25.369604 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.369610 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.369616 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.369623 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.369636 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.369643 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.369649 | orchestrator | 2026-02-02 00:59:25.369655 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-02 00:59:25.369662 | orchestrator | Monday 02 February 2026 00:50:37 +0000 (0:00:00.758) 0:03:01.382 ******* 2026-02-02 00:59:25.369668 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.369674 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.369680 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.369687 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.369693 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.369699 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.369705 | orchestrator | 2026-02-02 00:59:25.369720 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-02 00:59:25.369727 | orchestrator | Monday 02 February 2026 00:50:38 
+0000 (0:00:01.445) 0:03:02.827 ******* 2026-02-02 00:59:25.369733 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.369745 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.369751 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.369758 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.369764 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.369770 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.369777 | orchestrator | 2026-02-02 00:59:25.369783 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-02 00:59:25.369790 | orchestrator | Monday 02 February 2026 00:50:40 +0000 (0:00:01.519) 0:03:04.347 ******* 2026-02-02 00:59:25.369796 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.369803 | orchestrator | 2026-02-02 00:59:25.369810 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-02 00:59:25.369816 | orchestrator | Monday 02 February 2026 00:50:41 +0000 (0:00:01.201) 0:03:05.548 ******* 2026-02-02 00:59:25.369823 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-02 00:59:25.369830 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-02 00:59:25.369836 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-02 00:59:25.369842 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-02 00:59:25.369849 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-02 00:59:25.369855 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-02 00:59:25.369861 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-02 00:59:25.369868 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-02 00:59:25.369874 | 
orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-02 00:59:25.369880 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-02 00:59:25.369887 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-02 00:59:25.369893 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-02 00:59:25.369899 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-02 00:59:25.369906 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-02 00:59:25.369912 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-02 00:59:25.369919 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-02 00:59:25.369925 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-02 00:59:25.369932 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-02 00:59:25.369938 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-02 00:59:25.369945 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-02 00:59:25.369979 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-02 00:59:25.369987 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-02 00:59:25.369998 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-02 00:59:25.370004 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-02 00:59:25.370010 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-02 00:59:25.370042 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-02 00:59:25.370051 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-02 00:59:25.370057 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-02 00:59:25.370063 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/osd) 2026-02-02 00:59:25.370087 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-02 00:59:25.370096 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-02 00:59:25.370102 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-02 00:59:25.370109 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-02 00:59:25.370115 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-02 00:59:25.370121 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-02 00:59:25.370128 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 00:59:25.370134 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-02 00:59:25.370141 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-02 00:59:25.370147 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-02 00:59:25.370153 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-02 00:59:25.370160 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 00:59:25.370166 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-02 00:59:25.370172 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-02 00:59:25.370179 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-02 00:59:25.370185 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 00:59:25.370191 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-02 00:59:25.370197 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-02 00:59:25.370203 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-02 00:59:25.370209 | orchestrator | changed: 
[testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 00:59:25.370216 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 00:59:25.370222 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-02 00:59:25.370228 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 00:59:25.370238 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-02 00:59:25.370245 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-02 00:59:25.370251 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 00:59:25.370257 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 00:59:25.370263 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 00:59:25.370269 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 00:59:25.370276 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 00:59:25.370282 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 00:59:25.370288 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 00:59:25.370295 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 00:59:25.370301 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 00:59:25.370313 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 00:59:25.370319 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 00:59:25.370326 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 00:59:25.370332 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-02 
00:59:25.370338 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 00:59:25.370344 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 00:59:25.370350 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 00:59:25.370357 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 00:59:25.370363 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 00:59:25.370369 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-02 00:59:25.370375 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 00:59:25.370381 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 00:59:25.370388 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 00:59:25.370394 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 00:59:25.370400 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 00:59:25.370434 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 00:59:25.370446 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 00:59:25.370456 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 00:59:25.370466 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 00:59:25.370475 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 00:59:25.370485 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-02 00:59:25.370495 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 00:59:25.370505 | orchestrator | changed: [testbed-node-1] => 
(item=/var/run/ceph) 2026-02-02 00:59:25.370515 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 00:59:25.370525 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 00:59:25.370534 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-02 00:59:25.370544 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-02-02 00:59:25.370554 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-02 00:59:25.370564 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-02 00:59:25.370575 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-02 00:59:25.370585 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-02 00:59:25.370595 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-02 00:59:25.370605 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-02 00:59:25.370616 | orchestrator | 2026-02-02 00:59:25.370626 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-02 00:59:25.370637 | orchestrator | Monday 02 February 2026 00:50:48 +0000 (0:00:07.106) 0:03:12.655 ******* 2026-02-02 00:59:25.370647 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.370658 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.370666 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.370673 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.370680 | orchestrator | 2026-02-02 00:59:25.370686 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-02 00:59:25.370693 | orchestrator | Monday 02 February 2026 00:50:50 +0000 (0:00:01.674) 0:03:14.329 ******* 2026-02-02 00:59:25.370706 | 
orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 00:59:25.370713 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-02 00:59:25.370724 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-02 00:59:25.370731 | orchestrator | 2026-02-02 00:59:25.370737 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-02 00:59:25.370743 | orchestrator | Monday 02 February 2026 00:50:51 +0000 (0:00:01.007) 0:03:15.336 ******* 2026-02-02 00:59:25.370750 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 00:59:25.370761 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-02 00:59:25.370771 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-02 00:59:25.370782 | orchestrator | 2026-02-02 00:59:25.370792 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-02 00:59:25.370804 | orchestrator | Monday 02 February 2026 00:50:52 +0000 (0:00:01.506) 0:03:16.843 ******* 2026-02-02 00:59:25.370814 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.370825 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.370835 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.370846 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.370857 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.370868 | orchestrator | ok: [testbed-node-5] 2026-02-02 
00:59:25.370878 | orchestrator |
2026-02-02 00:59:25.370887 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-02 00:59:25.370894 | orchestrator | Monday 02 February 2026 00:50:53 +0000 (0:00:00.736) 0:03:17.580 *******
2026-02-02 00:59:25.370900 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.370906 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.370913 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.370919 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.370925 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.370931 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.370938 | orchestrator |
2026-02-02 00:59:25.370944 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-02 00:59:25.370950 | orchestrator | Monday 02 February 2026 00:50:54 +0000 (0:00:01.194) 0:03:18.774 *******
2026-02-02 00:59:25.370957 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.370963 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.370970 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.370976 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.370982 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.370988 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.370995 | orchestrator |
2026-02-02 00:59:25.371001 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-02 00:59:25.371008 | orchestrator | Monday 02 February 2026 00:50:55 +0000 (0:00:00.746) 0:03:19.520 *******
2026-02-02 00:59:25.371060 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.371119 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.371127 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.371133 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.371140 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.371146 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.371152 | orchestrator |
2026-02-02 00:59:25.371159 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-02 00:59:25.371165 | orchestrator | Monday 02 February 2026 00:50:56 +0000 (0:00:00.929) 0:03:20.450 *******
2026-02-02 00:59:25.371183 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.371189 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.371196 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.371202 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.371208 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.371214 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.371221 | orchestrator |
2026-02-02 00:59:25.371227 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-02 00:59:25.371234 | orchestrator | Monday 02 February 2026 00:50:56 +0000 (0:00:00.786) 0:03:21.237 *******
2026-02-02 00:59:25.371240 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.371246 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.371252 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.371259 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.371265 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.371271 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.371278 | orchestrator |
2026-02-02 00:59:25.371284 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-02 00:59:25.371290 | orchestrator | Monday 02 February 2026 00:50:58 +0000 (0:00:01.266) 0:03:22.503 *******
2026-02-02 00:59:25.371296 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.371303 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.371309 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.371315 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.371322 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.371328 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.371334 | orchestrator |
2026-02-02 00:59:25.371341 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-02 00:59:25.371347 | orchestrator | Monday 02 February 2026 00:50:59 +0000 (0:00:00.965) 0:03:23.469 *******
2026-02-02 00:59:25.371353 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.371360 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.371366 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.371372 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.371378 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.371384 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.371391 | orchestrator |
2026-02-02 00:59:25.371397 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-02 00:59:25.371403 | orchestrator | Monday 02 February 2026 00:51:00 +0000 (0:00:00.863) 0:03:24.332 *******
2026-02-02 00:59:25.371409 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.371416 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.371422 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.371428 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.371440 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.371446 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.371452 | orchestrator |
2026-02-02 00:59:25.371459 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-02 00:59:25.371465 | orchestrator | Monday 02 February 2026 00:51:04 +0000 (0:00:04.668) 0:03:29.001 *******
2026-02-02 00:59:25.371472 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.371478 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.371484 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.371491 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.371497 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.371503 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.371509 | orchestrator |
2026-02-02 00:59:25.371516 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-02 00:59:25.371522 | orchestrator | Monday 02 February 2026 00:51:05 +0000 (0:00:00.775) 0:03:29.776 *******
2026-02-02 00:59:25.371529 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.371540 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.371547 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.371553 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.371559 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.371565 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.371572 | orchestrator |
2026-02-02 00:59:25.371578 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-02 00:59:25.371584 | orchestrator | Monday 02 February 2026 00:51:06 +0000 (0:00:01.327) 0:03:31.104 *******
2026-02-02 00:59:25.371591 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.371597 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.371603 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.371609 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.371615 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.371621 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.371628 | orchestrator |
2026-02-02 00:59:25.371634 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-02 00:59:25.371643 | orchestrator | Monday 02 February 2026 00:51:07 +0000 (0:00:00.682) 0:03:31.787 *******
2026-02-02 00:59:25.371652 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.371661 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.371671 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.371679 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-02 00:59:25.371688 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-02 00:59:25.371697 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-02 00:59:25.371706 | orchestrator |
2026-02-02 00:59:25.371748 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-02 00:59:25.371758 | orchestrator | Monday 02 February 2026 00:51:08 +0000 (0:00:01.114) 0:03:32.902 *******
2026-02-02 00:59:25.371767 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.371776 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.371787 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-02 00:59:25.371799 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-02 00:59:25.371809 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.371818 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-02 00:59:25.371824 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-02 00:59:25.371829 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.371835 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.371841 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-02 00:59:25.371860 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-02 00:59:25.371869 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.371877 | orchestrator |
2026-02-02 00:59:25.371886 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-02 00:59:25.371896 | orchestrator | Monday 02 February 2026 00:51:09 +0000 (0:00:01.171) 0:03:34.074 *******
2026-02-02 00:59:25.371905 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.371914 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.371923 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.371933 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.371940 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.371945 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.371951 | orchestrator |
2026-02-02 00:59:25.371956 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-02 00:59:25.371962 | orchestrator | Monday 02 February 2026 00:51:10 +0000 (0:00:01.204) 0:03:35.279 *******
2026-02-02 00:59:25.371967 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.371973 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.371978 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.371984 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.371989 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.371995 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.372000 | orchestrator |
2026-02-02 00:59:25.372006 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 00:59:25.372012 | orchestrator | Monday 02 February 2026 00:51:11 +0000 (0:00:00.627) 0:03:35.907 *******
2026-02-02 00:59:25.372017 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.372023 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.372028 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.372033 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.372039 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.372045 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.372050 | orchestrator |
2026-02-02 00:59:25.372056 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 00:59:25.372061 | orchestrator | Monday 02 February 2026 00:51:12 +0000 (0:00:01.234) 0:03:37.142 *******
2026-02-02 00:59:25.372067 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.372091 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.372096 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.372102 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.372107 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.372113 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.372120 | orchestrator |
2026-02-02 00:59:25.372128 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 00:59:25.372137 | orchestrator | Monday 02 February 2026 00:51:13 +0000 (0:00:00.789) 0:03:37.932 *******
2026-02-02 00:59:25.372146 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.372186 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.372197 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.372202 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.372208 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.372213 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.372219 | orchestrator |
2026-02-02 00:59:25.372225 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 00:59:25.372231 | orchestrator | Monday 02 February 2026 00:51:14 +0000 (0:00:00.865) 0:03:38.797 *******
2026-02-02 00:59:25.372243 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.372249 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.372254 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.372260 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.372265 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.372271 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.372276 | orchestrator |
2026-02-02 00:59:25.372282 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 00:59:25.372288 | orchestrator | Monday 02 February 2026 00:51:15 +0000 (0:00:00.735) 0:03:39.533 *******
2026-02-02 00:59:25.372293 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-02 00:59:25.372299 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-02 00:59:25.372304 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-02 00:59:25.372310 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.372315 | orchestrator |
2026-02-02 00:59:25.372321 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 00:59:25.372326 | orchestrator | Monday 02 February 2026 00:51:15 +0000 (0:00:00.568) 0:03:40.102 *******
2026-02-02 00:59:25.372332 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-02 00:59:25.372337 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-02 00:59:25.372343 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-02 00:59:25.372348 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.372354 | orchestrator |
2026-02-02 00:59:25.372360 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 00:59:25.372365 | orchestrator | Monday 02 February 2026 00:51:16 +0000 (0:00:00.816) 0:03:40.918 *******
2026-02-02 00:59:25.372371 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-02 00:59:25.372376 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-02 00:59:25.372382 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-02 00:59:25.372387 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.372392 | orchestrator |
2026-02-02 00:59:25.372398 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 00:59:25.372403 | orchestrator | Monday 02 February 2026 00:51:17 +0000 (0:00:00.400) 0:03:41.319 *******
2026-02-02 00:59:25.372409 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.372414 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.372420 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.372425 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.372431 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.372436 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.372442 | orchestrator |
2026-02-02 00:59:25.372447 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 00:59:25.372460 | orchestrator | Monday 02 February 2026 00:51:17 +0000 (0:00:00.589) 0:03:41.909 *******
2026-02-02 00:59:25.372466 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-02 00:59:25.372472 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.372477 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-02 00:59:25.372483 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.372488 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-02 00:59:25.372493 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.372499 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-02 00:59:25.372505 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-02 00:59:25.372510 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-02 00:59:25.372516 | orchestrator |
2026-02-02 00:59:25.372521 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-02 00:59:25.372527 | orchestrator | Monday 02 February 2026 00:51:19 +0000 (0:00:02.068) 0:03:43.977 *******
2026-02-02 00:59:25.372533 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:59:25.372538 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:59:25.372548 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:59:25.372553 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:59:25.372559 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:59:25.372564 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:59:25.372570 | orchestrator |
2026-02-02 00:59:25.372575 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-02 00:59:25.372581 | orchestrator | Monday 02 February 2026 00:51:22 +0000 (0:00:02.847) 0:03:46.825 *******
2026-02-02 00:59:25.372586 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:59:25.372592 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:59:25.372597 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:59:25.372603 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:59:25.372608 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:59:25.372614 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:59:25.372619 | orchestrator |
2026-02-02 00:59:25.372625 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-02 00:59:25.372630 | orchestrator | Monday 02 February 2026 00:51:23 +0000 (0:00:00.973) 0:03:47.799 *******
2026-02-02 00:59:25.372636 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.372641 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.372646 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.372652 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:59:25.372658 | orchestrator |
2026-02-02 00:59:25.372663 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-02 00:59:25.372669 | orchestrator | Monday 02 February 2026 00:51:24 +0000 (0:00:00.990) 0:03:48.789 *******
2026-02-02 00:59:25.372674 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.372680 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.372686 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.372691 | orchestrator |
2026-02-02 00:59:25.372716 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-02 00:59:25.372722 | orchestrator | Monday 02 February 2026 00:51:24 +0000 (0:00:00.335) 0:03:49.125 *******
2026-02-02 00:59:25.372728 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:59:25.372733 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:59:25.372739 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:59:25.372744 | orchestrator |
2026-02-02 00:59:25.372750 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-02 00:59:25.372756 | orchestrator | Monday 02 February 2026 00:51:26 +0000 (0:00:01.190) 0:03:50.316 *******
2026-02-02 00:59:25.372761 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 00:59:25.372770 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-02 00:59:25.372779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-02 00:59:25.372789 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.372798 | orchestrator |
2026-02-02 00:59:25.372807 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-02 00:59:25.372816 | orchestrator | Monday 02 February 2026 00:51:26 +0000 (0:00:00.952) 0:03:51.268 *******
2026-02-02 00:59:25.372824 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.372832 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.372842 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.372852 | orchestrator |
2026-02-02 00:59:25.372862 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-02 00:59:25.372871 | orchestrator | Monday 02 February 2026 00:51:27 +0000 (0:00:00.594) 0:03:51.862 *******
2026-02-02 00:59:25.372879 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.372888 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.372897 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.372907 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.372917 | orchestrator |
2026-02-02 00:59:25.372933 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-02 00:59:25.372942 | orchestrator | Monday 02 February 2026 00:51:28 +0000 (0:00:00.969) 0:03:52.832 *******
2026-02-02 00:59:25.372951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 00:59:25.372960 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 00:59:25.372970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 00:59:25.372979 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.372988 | orchestrator |
2026-02-02 00:59:25.372996 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-02 00:59:25.373006 | orchestrator | Monday 02 February 2026 00:51:29 +0000 (0:00:00.705) 0:03:53.537 *******
2026-02-02 00:59:25.373012 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373018 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.373023 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.373028 | orchestrator |
2026-02-02 00:59:25.373034 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-02 00:59:25.373039 | orchestrator | Monday 02 February 2026 00:51:29 +0000 (0:00:00.626) 0:03:54.163 *******
2026-02-02 00:59:25.373045 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373050 | orchestrator |
2026-02-02 00:59:25.373062 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-02 00:59:25.373093 | orchestrator | Monday 02 February 2026 00:51:30 +0000 (0:00:00.239) 0:03:54.402 *******
2026-02-02 00:59:25.373103 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373113 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.373122 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.373130 | orchestrator |
2026-02-02 00:59:25.373138 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-02 00:59:25.373149 | orchestrator | Monday 02 February 2026 00:51:30 +0000 (0:00:00.483) 0:03:54.885 *******
2026-02-02 00:59:25.373155 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373160 | orchestrator |
2026-02-02 00:59:25.373166 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-02 00:59:25.373171 | orchestrator | Monday 02 February 2026 00:51:30 +0000 (0:00:00.216) 0:03:55.102 *******
2026-02-02 00:59:25.373177 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373182 | orchestrator |
2026-02-02 00:59:25.373188 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-02 00:59:25.373194 | orchestrator | Monday 02 February 2026 00:51:30 +0000 (0:00:00.182) 0:03:55.284 *******
2026-02-02 00:59:25.373199 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373205 | orchestrator |
2026-02-02 00:59:25.373210 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-02 00:59:25.373216 | orchestrator | Monday 02 February 2026 00:51:31 +0000 (0:00:00.105) 0:03:55.390 *******
2026-02-02 00:59:25.373221 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373228 | orchestrator |
2026-02-02 00:59:25.373237 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-02 00:59:25.373246 | orchestrator | Monday 02 February 2026 00:51:31 +0000 (0:00:00.204) 0:03:55.595 *******
2026-02-02 00:59:25.373255 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373264 | orchestrator |
2026-02-02 00:59:25.373273 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-02 00:59:25.373282 | orchestrator | Monday 02 February 2026 00:51:31 +0000 (0:00:00.214) 0:03:55.810 *******
2026-02-02 00:59:25.373291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 00:59:25.373300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 00:59:25.373309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 00:59:25.373318 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373327 | orchestrator |
2026-02-02 00:59:25.373338 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-02 00:59:25.373348 | orchestrator | Monday 02 February 2026 00:51:32 +0000 (0:00:00.635) 0:03:56.445 *******
2026-02-02 00:59:25.373354 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373388 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.373394 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.373400 | orchestrator |
2026-02-02 00:59:25.373406 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-02 00:59:25.373411 | orchestrator | Monday 02 February 2026 00:51:32 +0000 (0:00:00.609) 0:03:57.055 *******
2026-02-02 00:59:25.373417 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373423 | orchestrator |
2026-02-02 00:59:25.373428 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-02 00:59:25.373434 | orchestrator | Monday 02 February 2026 00:51:32 +0000 (0:00:00.229) 0:03:57.285 *******
2026-02-02 00:59:25.373439 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373445 | orchestrator |
2026-02-02 00:59:25.373450 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-02 00:59:25.373456 | orchestrator | Monday 02 February 2026 00:51:33 +0000 (0:00:00.197) 0:03:57.483 *******
2026-02-02 00:59:25.373461 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.373467 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.373472 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.373478 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.373484 | orchestrator |
2026-02-02 00:59:25.373489 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-02 00:59:25.373495 | orchestrator | Monday 02 February 2026 00:51:34 +0000 (0:00:00.927) 0:03:58.410 *******
2026-02-02 00:59:25.373500 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.373506 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.373511 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.373517 | orchestrator |
2026-02-02 00:59:25.373522 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-02 00:59:25.373528 | orchestrator | Monday 02 February 2026 00:51:34 +0000 (0:00:00.332) 0:03:58.743 *******
2026-02-02 00:59:25.373533 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:59:25.373539 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:59:25.373544 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:59:25.373550 | orchestrator |
2026-02-02 00:59:25.373555 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-02 00:59:25.373561 | orchestrator | Monday 02 February 2026 00:51:35 +0000 (0:00:01.271) 0:04:00.014 *******
2026-02-02 00:59:25.373566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 00:59:25.373572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 00:59:25.373577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 00:59:25.373583 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373588 | orchestrator |
2026-02-02 00:59:25.373594 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-02 00:59:25.373599 | orchestrator | Monday 02 February 2026 00:51:36 +0000 (0:00:00.942) 0:04:00.957 *******
2026-02-02 00:59:25.373605 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.373610 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.373616 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.373621 | orchestrator |
2026-02-02 00:59:25.373627 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-02 00:59:25.373636 | orchestrator | Monday 02 February 2026 00:51:36 +0000 (0:00:00.342) 0:04:01.299 *******
2026-02-02 00:59:25.373642 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.373647 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.373653 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.373659 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.373664 | orchestrator |
2026-02-02 00:59:25.373675 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-02 00:59:25.373681 | orchestrator | Monday 02 February 2026 00:51:38 +0000 (0:00:01.154) 0:04:02.453 *******
2026-02-02 00:59:25.373687 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.373692 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.373698 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.373707 | orchestrator |
2026-02-02 00:59:25.373716 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-02 00:59:25.373730 | orchestrator | Monday 02 February 2026 00:51:38 +0000 (0:00:00.301) 0:04:02.755 *******
2026-02-02 00:59:25.373742 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:59:25.373750 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:59:25.373757 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:59:25.373765 | orchestrator |
2026-02-02 00:59:25.373773 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-02 00:59:25.373782 | orchestrator | Monday 02 February 2026 00:51:39 +0000 (0:00:01.402) 0:04:04.158 *******
2026-02-02 00:59:25.373791 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 00:59:25.373800 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 00:59:25.373809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 00:59:25.373818 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373826 | orchestrator |
2026-02-02 00:59:25.373835 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-02 00:59:25.373843 | orchestrator | Monday 02 February 2026 00:51:40 +0000 (0:00:00.584) 0:04:04.743 *******
2026-02-02 00:59:25.373852 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.373861 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.373871 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.373879 | orchestrator |
2026-02-02 00:59:25.373888 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-02 00:59:25.373897 | orchestrator | Monday 02 February 2026 00:51:40 +0000 (0:00:00.328) 0:04:05.071 *******
2026-02-02 00:59:25.373906 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.373918 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.373930 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.373939 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.373948 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.373957 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.373965 | orchestrator |
2026-02-02 00:59:25.374009 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-02 00:59:25.374055 | orchestrator | Monday 02 February 2026 00:51:41 +0000 (0:00:00.585) 0:04:05.656 *******
2026-02-02 00:59:25.374064 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.374115 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.374125 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.374134 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:59:25.374143 | orchestrator |
2026-02-02 00:59:25.374153 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-02 00:59:25.374160 | orchestrator | Monday 02 February 2026 00:51:42 +0000 (0:00:01.052) 0:04:06.709 *******
2026-02-02 00:59:25.374165 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.374185 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.374191 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.374196 | orchestrator |
2026-02-02 00:59:25.374202 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-02 00:59:25.374208 | orchestrator | Monday 02 February 2026 00:51:42 +0000 (0:00:00.302) 0:04:07.012 *******
2026-02-02 00:59:25.374213 | orchestrator | changed: [testbed-node-0]
2026-02-02 00:59:25.374219 | orchestrator | changed: [testbed-node-1]
2026-02-02 00:59:25.374233 | orchestrator | changed: [testbed-node-2]
2026-02-02 00:59:25.374239 | orchestrator |
2026-02-02 00:59:25.374244 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-02 00:59:25.374258 | orchestrator | Monday 02 February 2026 00:51:44 +0000 (0:00:01.598) 0:04:08.611 *******
2026-02-02 00:59:25.374264 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 00:59:25.374269 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-02 00:59:25.374275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-02 00:59:25.374281 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.374286 | orchestrator |
2026-02-02 00:59:25.374292 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-02 00:59:25.374297 | orchestrator | Monday 02 February 2026 00:51:44 +0000 (0:00:00.615) 0:04:09.226 *******
2026-02-02 00:59:25.374303 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.374308 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.374314 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.374319 | orchestrator |
2026-02-02 00:59:25.374325 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-02-02 00:59:25.374330 | orchestrator |
2026-02-02 00:59:25.374336 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 00:59:25.374342 | orchestrator | Monday 02 February 2026 00:51:45 +0000 (0:00:00.653) 0:04:09.879 *******
2026-02-02 00:59:25.374356 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:59:25.374362 | orchestrator |
2026-02-02 00:59:25.374367 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 00:59:25.374373 | orchestrator | Monday 02 February 2026 00:51:46 +0000 (0:00:00.846) 0:04:10.726 *******
2026-02-02 00:59:25.374379 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:59:25.374384 | orchestrator |
2026-02-02 00:59:25.374395 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 00:59:25.374400 | orchestrator | Monday 02 February 2026 00:51:47 +0000 (0:00:00.663) 0:04:11.390 *******
2026-02-02 00:59:25.374406 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.374411 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.374417 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.374422 | orchestrator |
2026-02-02 00:59:25.374428 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 00:59:25.374433 | orchestrator | Monday 02 February 2026 00:51:47 +0000 (0:00:00.824) 0:04:12.214 *******
2026-02-02 00:59:25.374439 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.374444 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.374450 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.374455 | orchestrator |
2026-02-02 00:59:25.374461 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 00:59:25.374466 | orchestrator | Monday 02 February 2026 00:51:48 +0000 (0:00:00.659) 0:04:12.874 *******
2026-02-02 00:59:25.374472 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.374477 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.374483 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.374488 | orchestrator |
2026-02-02 00:59:25.374493 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 00:59:25.374499 | orchestrator | Monday 02 February 2026 00:51:49
+0000 (0:00:00.469) 0:04:13.343 ******* 2026-02-02 00:59:25.374504 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.374510 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.374515 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.374521 | orchestrator | 2026-02-02 00:59:25.374526 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-02 00:59:25.374532 | orchestrator | Monday 02 February 2026 00:51:49 +0000 (0:00:00.410) 0:04:13.753 ******* 2026-02-02 00:59:25.374537 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.374543 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.374548 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.374557 | orchestrator | 2026-02-02 00:59:25.374563 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-02 00:59:25.374568 | orchestrator | Monday 02 February 2026 00:51:50 +0000 (0:00:01.014) 0:04:14.767 ******* 2026-02-02 00:59:25.374574 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.374579 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.374585 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.374590 | orchestrator | 2026-02-02 00:59:25.374596 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-02 00:59:25.374601 | orchestrator | Monday 02 February 2026 00:51:51 +0000 (0:00:00.686) 0:04:15.454 ******* 2026-02-02 00:59:25.374607 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.374613 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.374618 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.374624 | orchestrator | 2026-02-02 00:59:25.374661 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 00:59:25.374668 | orchestrator | Monday 02 February 2026 00:51:51 +0000 (0:00:00.393) 
0:04:15.848 ******* 2026-02-02 00:59:25.374673 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.374679 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.374684 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.374690 | orchestrator | 2026-02-02 00:59:25.374695 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 00:59:25.374701 | orchestrator | Monday 02 February 2026 00:51:52 +0000 (0:00:00.706) 0:04:16.554 ******* 2026-02-02 00:59:25.374706 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.374711 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.374717 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.374722 | orchestrator | 2026-02-02 00:59:25.374728 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 00:59:25.374733 | orchestrator | Monday 02 February 2026 00:51:52 +0000 (0:00:00.694) 0:04:17.249 ******* 2026-02-02 00:59:25.374739 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.374744 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.374750 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.374755 | orchestrator | 2026-02-02 00:59:25.374761 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 00:59:25.374775 | orchestrator | Monday 02 February 2026 00:51:53 +0000 (0:00:00.468) 0:04:17.717 ******* 2026-02-02 00:59:25.374781 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.374787 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.374792 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.374798 | orchestrator | 2026-02-02 00:59:25.374803 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 00:59:25.374809 | orchestrator | Monday 02 February 2026 00:51:53 +0000 (0:00:00.334) 0:04:18.051 ******* 2026-02-02 00:59:25.374814 | 
orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.374820 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.374825 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.374831 | orchestrator | 2026-02-02 00:59:25.374836 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 00:59:25.374842 | orchestrator | Monday 02 February 2026 00:51:54 +0000 (0:00:00.356) 0:04:18.407 ******* 2026-02-02 00:59:25.374847 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.374853 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.374858 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.374864 | orchestrator | 2026-02-02 00:59:25.374870 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 00:59:25.374875 | orchestrator | Monday 02 February 2026 00:51:54 +0000 (0:00:00.313) 0:04:18.721 ******* 2026-02-02 00:59:25.374881 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.374886 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.374892 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.374897 | orchestrator | 2026-02-02 00:59:25.374902 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 00:59:25.374912 | orchestrator | Monday 02 February 2026 00:51:55 +0000 (0:00:00.591) 0:04:19.312 ******* 2026-02-02 00:59:25.374918 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.374923 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.374929 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.374934 | orchestrator | 2026-02-02 00:59:25.374943 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 00:59:25.374949 | orchestrator | Monday 02 February 2026 00:51:55 +0000 (0:00:00.328) 0:04:19.640 ******* 2026-02-02 00:59:25.374954 | 
orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.374960 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.374965 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.374971 | orchestrator | 2026-02-02 00:59:25.374976 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 00:59:25.374982 | orchestrator | Monday 02 February 2026 00:51:55 +0000 (0:00:00.388) 0:04:20.028 ******* 2026-02-02 00:59:25.374987 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.374993 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.374998 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.375004 | orchestrator | 2026-02-02 00:59:25.375009 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 00:59:25.375015 | orchestrator | Monday 02 February 2026 00:51:56 +0000 (0:00:00.348) 0:04:20.377 ******* 2026-02-02 00:59:25.375020 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.375026 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.375031 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.375037 | orchestrator | 2026-02-02 00:59:25.375042 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 00:59:25.375048 | orchestrator | Monday 02 February 2026 00:51:56 +0000 (0:00:00.502) 0:04:20.879 ******* 2026-02-02 00:59:25.375057 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.375066 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.375117 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.375126 | orchestrator | 2026-02-02 00:59:25.375135 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-02 00:59:25.375144 | orchestrator | Monday 02 February 2026 00:51:57 +0000 (0:00:01.196) 0:04:22.075 ******* 2026-02-02 00:59:25.375153 | orchestrator | ok: [testbed-node-0] 2026-02-02 
00:59:25.375162 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.375170 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.375189 | orchestrator | 2026-02-02 00:59:25.375208 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-02 00:59:25.375217 | orchestrator | Monday 02 February 2026 00:51:58 +0000 (0:00:00.501) 0:04:22.577 ******* 2026-02-02 00:59:25.375225 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:59:25.375231 | orchestrator | 2026-02-02 00:59:25.375237 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-02 00:59:25.375243 | orchestrator | Monday 02 February 2026 00:51:59 +0000 (0:00:01.033) 0:04:23.611 ******* 2026-02-02 00:59:25.375254 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.375259 | orchestrator | 2026-02-02 00:59:25.375264 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-02 00:59:25.375296 | orchestrator | Monday 02 February 2026 00:51:59 +0000 (0:00:00.207) 0:04:23.818 ******* 2026-02-02 00:59:25.375302 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-02 00:59:25.375308 | orchestrator | 2026-02-02 00:59:25.375314 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-02 00:59:25.375322 | orchestrator | Monday 02 February 2026 00:52:00 +0000 (0:00:01.245) 0:04:25.063 ******* 2026-02-02 00:59:25.375333 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.375342 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.375349 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.375357 | orchestrator | 2026-02-02 00:59:25.375372 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-02 00:59:25.375380 | orchestrator | Monday 02 February 
2026 00:52:01 +0000 (0:00:00.365) 0:04:25.429 ******* 2026-02-02 00:59:25.375387 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.375394 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.375400 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.375408 | orchestrator | 2026-02-02 00:59:25.375415 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-02 00:59:25.375423 | orchestrator | Monday 02 February 2026 00:52:01 +0000 (0:00:00.445) 0:04:25.875 ******* 2026-02-02 00:59:25.375430 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.375438 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.375457 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.375467 | orchestrator | 2026-02-02 00:59:25.375472 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-02 00:59:25.375477 | orchestrator | Monday 02 February 2026 00:52:02 +0000 (0:00:01.387) 0:04:27.262 ******* 2026-02-02 00:59:25.375482 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.375487 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.375492 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.375497 | orchestrator | 2026-02-02 00:59:25.375502 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-02 00:59:25.375507 | orchestrator | Monday 02 February 2026 00:52:03 +0000 (0:00:00.889) 0:04:28.151 ******* 2026-02-02 00:59:25.375519 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.375524 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.375528 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.375533 | orchestrator | 2026-02-02 00:59:25.375538 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-02 00:59:25.375543 | orchestrator | Monday 02 February 2026 00:52:04 +0000 
(0:00:01.141) 0:04:29.292 ******* 2026-02-02 00:59:25.375548 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.375553 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.375558 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.375563 | orchestrator | 2026-02-02 00:59:25.375568 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-02 00:59:25.375573 | orchestrator | Monday 02 February 2026 00:52:05 +0000 (0:00:00.832) 0:04:30.125 ******* 2026-02-02 00:59:25.375578 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.375583 | orchestrator | 2026-02-02 00:59:25.375588 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-02 00:59:25.375593 | orchestrator | Monday 02 February 2026 00:52:06 +0000 (0:00:01.134) 0:04:31.260 ******* 2026-02-02 00:59:25.375597 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.375602 | orchestrator | 2026-02-02 00:59:25.375607 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-02 00:59:25.375617 | orchestrator | Monday 02 February 2026 00:52:07 +0000 (0:00:00.685) 0:04:31.945 ******* 2026-02-02 00:59:25.375622 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 00:59:25.375627 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 00:59:25.375632 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 00:59:25.375637 | orchestrator | changed: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-02 00:59:25.375642 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-02 00:59:25.375647 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-02 00:59:25.375652 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-02 00:59:25.375658 | orchestrator | changed: [testbed-node-2 
-> {{ item }}] 2026-02-02 00:59:25.375663 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-02 00:59:25.375668 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-02 00:59:25.375673 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-02 00:59:25.375682 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-02 00:59:25.375687 | orchestrator | 2026-02-02 00:59:25.375692 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-02 00:59:25.375697 | orchestrator | Monday 02 February 2026 00:52:11 +0000 (0:00:04.143) 0:04:36.088 ******* 2026-02-02 00:59:25.375702 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.375706 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.375718 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.375723 | orchestrator | 2026-02-02 00:59:25.375728 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-02 00:59:25.375732 | orchestrator | Monday 02 February 2026 00:52:13 +0000 (0:00:01.280) 0:04:37.369 ******* 2026-02-02 00:59:25.375737 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.375742 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.375747 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.375752 | orchestrator | 2026-02-02 00:59:25.375757 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-02 00:59:25.375762 | orchestrator | Monday 02 February 2026 00:52:13 +0000 (0:00:00.414) 0:04:37.784 ******* 2026-02-02 00:59:25.375767 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.375772 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.375777 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.375782 | orchestrator | 2026-02-02 00:59:25.375787 | orchestrator | TASK [ceph-mon : Generate initial monmap] 
************************************** 2026-02-02 00:59:25.375791 | orchestrator | Monday 02 February 2026 00:52:13 +0000 (0:00:00.385) 0:04:38.169 ******* 2026-02-02 00:59:25.375796 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.375801 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.375806 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.375811 | orchestrator | 2026-02-02 00:59:25.375841 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-02 00:59:25.375847 | orchestrator | Monday 02 February 2026 00:52:17 +0000 (0:00:03.598) 0:04:41.768 ******* 2026-02-02 00:59:25.375852 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.375857 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.375862 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.375866 | orchestrator | 2026-02-02 00:59:25.375871 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-02 00:59:25.375876 | orchestrator | Monday 02 February 2026 00:52:18 +0000 (0:00:01.242) 0:04:43.010 ******* 2026-02-02 00:59:25.375881 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.375886 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.375891 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.375896 | orchestrator | 2026-02-02 00:59:25.375900 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-02 00:59:25.375905 | orchestrator | Monday 02 February 2026 00:52:19 +0000 (0:00:00.356) 0:04:43.367 ******* 2026-02-02 00:59:25.375910 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:59:25.375915 | orchestrator | 2026-02-02 00:59:25.375920 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-02 00:59:25.375925 | 
orchestrator | Monday 02 February 2026 00:52:19 +0000 (0:00:00.692) 0:04:44.059 ******* 2026-02-02 00:59:25.375930 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.375935 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.375939 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.375944 | orchestrator | 2026-02-02 00:59:25.375949 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-02 00:59:25.375954 | orchestrator | Monday 02 February 2026 00:52:20 +0000 (0:00:00.904) 0:04:44.963 ******* 2026-02-02 00:59:25.375959 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.375971 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.375976 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.375981 | orchestrator | 2026-02-02 00:59:25.375986 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-02 00:59:25.376000 | orchestrator | Monday 02 February 2026 00:52:21 +0000 (0:00:00.436) 0:04:45.400 ******* 2026-02-02 00:59:25.376005 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:59:25.376011 | orchestrator | 2026-02-02 00:59:25.376016 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-02 00:59:25.376021 | orchestrator | Monday 02 February 2026 00:52:21 +0000 (0:00:00.646) 0:04:46.047 ******* 2026-02-02 00:59:25.376026 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.376031 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.376036 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.376041 | orchestrator | 2026-02-02 00:59:25.376046 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-02 00:59:25.376050 | orchestrator | Monday 02 February 2026 00:52:24 +0000 (0:00:02.425) 
0:04:48.472 ******* 2026-02-02 00:59:25.376055 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.376060 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.376065 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.376092 | orchestrator | 2026-02-02 00:59:25.376104 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-02 00:59:25.376112 | orchestrator | Monday 02 February 2026 00:52:25 +0000 (0:00:01.795) 0:04:50.268 ******* 2026-02-02 00:59:25.376117 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.376122 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.376127 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.376132 | orchestrator | 2026-02-02 00:59:25.376137 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-02 00:59:25.376142 | orchestrator | Monday 02 February 2026 00:52:27 +0000 (0:00:01.567) 0:04:51.836 ******* 2026-02-02 00:59:25.376147 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.376151 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.376156 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.376161 | orchestrator | 2026-02-02 00:59:25.376166 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-02 00:59:25.376171 | orchestrator | Monday 02 February 2026 00:52:29 +0000 (0:00:01.833) 0:04:53.669 ******* 2026-02-02 00:59:25.376176 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:59:25.376181 | orchestrator | 2026-02-02 00:59:25.376186 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-02-02 00:59:25.376191 | orchestrator | Monday 02 February 2026 00:52:30 +0000 (0:00:01.062) 0:04:54.732 ******* 2026-02-02 00:59:25.376196 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-02-02 00:59:25.376202 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.376210 | orchestrator | 2026-02-02 00:59:25.376222 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-02 00:59:25.376231 | orchestrator | Monday 02 February 2026 00:52:52 +0000 (0:00:21.940) 0:05:16.673 ******* 2026-02-02 00:59:25.376239 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.376247 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.376255 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.376261 | orchestrator | 2026-02-02 00:59:25.376269 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-02 00:59:25.376276 | orchestrator | Monday 02 February 2026 00:53:01 +0000 (0:00:08.915) 0:05:25.589 ******* 2026-02-02 00:59:25.376284 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.376291 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.376299 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.376307 | orchestrator | 2026-02-02 00:59:25.376315 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-02 00:59:25.376324 | orchestrator | Monday 02 February 2026 00:53:01 +0000 (0:00:00.328) 0:05:25.918 ******* 2026-02-02 00:59:25.376368 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ee16b14284a84a7787de3b96fd9b0d4af96b7f5'}}, {'key': 'public_network', 
'value': '192.168.16.0/20'}]) 2026-02-02 00:59:25.376377 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ee16b14284a84a7787de3b96fd9b0d4af96b7f5'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-02 00:59:25.376384 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ee16b14284a84a7787de3b96fd9b0d4af96b7f5'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-02 00:59:25.376391 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ee16b14284a84a7787de3b96fd9b0d4af96b7f5'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-02 00:59:25.376397 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ee16b14284a84a7787de3b96fd9b0d4af96b7f5'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-02 00:59:25.376407 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__1ee16b14284a84a7787de3b96fd9b0d4af96b7f5'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1ee16b14284a84a7787de3b96fd9b0d4af96b7f5'}])  2026-02-02 00:59:25.376415 | orchestrator | 2026-02-02 00:59:25.376423 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-02 00:59:25.376431 | orchestrator | Monday 02 February 2026 00:53:16 +0000 (0:00:15.011) 0:05:40.929 ******* 2026-02-02 00:59:25.376439 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.376447 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.376455 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.376462 | orchestrator | 2026-02-02 00:59:25.376470 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-02 00:59:25.376477 | orchestrator | Monday 02 February 2026 00:53:16 +0000 (0:00:00.361) 0:05:41.290 ******* 2026-02-02 00:59:25.376484 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:59:25.376492 | orchestrator | 2026-02-02 00:59:25.376500 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-02 00:59:25.376508 | orchestrator | Monday 02 February 2026 00:53:17 +0000 (0:00:00.826) 0:05:42.116 ******* 2026-02-02 00:59:25.376516 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.376525 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.376532 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.376539 | orchestrator | 2026-02-02 00:59:25.376546 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-02 00:59:25.376554 | orchestrator | Monday 02 February 2026 00:53:18 +0000 (0:00:00.337) 0:05:42.454 ******* 2026-02-02 00:59:25.376562 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.376576 | 
orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.376584 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.376592 | orchestrator | 2026-02-02 00:59:25.376600 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-02 00:59:25.376609 | orchestrator | Monday 02 February 2026 00:53:18 +0000 (0:00:00.342) 0:05:42.796 ******* 2026-02-02 00:59:25.376617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-02 00:59:25.376624 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-02 00:59:25.376631 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 00:59:25.376639 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.376646 | orchestrator | 2026-02-02 00:59:25.376653 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-02 00:59:25.376659 | orchestrator | Monday 02 February 2026 00:53:19 +0000 (0:00:01.008) 0:05:43.804 ******* 2026-02-02 00:59:25.376666 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.376673 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.376681 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.376689 | orchestrator | 2026-02-02 00:59:25.376725 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-02-02 00:59:25.376735 | orchestrator | 2026-02-02 00:59:25.376743 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 00:59:25.376751 | orchestrator | Monday 02 February 2026 00:53:20 +0000 (0:00:00.882) 0:05:44.687 ******* 2026-02-02 00:59:25.376760 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:59:25.376769 | orchestrator | 2026-02-02 00:59:25.376776 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-02-02 00:59:25.376781 | orchestrator | Monday 02 February 2026 00:53:20 +0000 (0:00:00.574) 0:05:45.262 ******* 2026-02-02 00:59:25.376786 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-02-02 00:59:25.376791 | orchestrator | 2026-02-02 00:59:25.376796 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-02 00:59:25.376800 | orchestrator | Monday 02 February 2026 00:53:21 +0000 (0:00:00.880) 0:05:46.142 ******* 2026-02-02 00:59:25.376805 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.376810 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.376815 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.376820 | orchestrator | 2026-02-02 00:59:25.376825 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-02 00:59:25.376829 | orchestrator | Monday 02 February 2026 00:53:22 +0000 (0:00:00.811) 0:05:46.953 ******* 2026-02-02 00:59:25.376834 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.376839 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.376844 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.376849 | orchestrator | 2026-02-02 00:59:25.376854 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-02 00:59:25.376859 | orchestrator | Monday 02 February 2026 00:53:22 +0000 (0:00:00.340) 0:05:47.294 ******* 2026-02-02 00:59:25.376863 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.376868 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.376873 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.376878 | orchestrator | 2026-02-02 00:59:25.376883 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-02 
00:59:25.376888 | orchestrator | Monday 02 February 2026 00:53:23 +0000 (0:00:00.336) 0:05:47.630 ******* 2026-02-02 00:59:25.376892 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.376897 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.376902 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.376907 | orchestrator | 2026-02-02 00:59:25.376912 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-02 00:59:25.376922 | orchestrator | Monday 02 February 2026 00:53:23 +0000 (0:00:00.654) 0:05:48.285 ******* 2026-02-02 00:59:25.376928 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.376933 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.376937 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.376942 | orchestrator | 2026-02-02 00:59:25.376947 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-02 00:59:25.376952 | orchestrator | Monday 02 February 2026 00:53:24 +0000 (0:00:00.701) 0:05:48.987 ******* 2026-02-02 00:59:25.376960 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.376968 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.376981 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.376989 | orchestrator | 2026-02-02 00:59:25.376996 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-02 00:59:25.377004 | orchestrator | Monday 02 February 2026 00:53:25 +0000 (0:00:00.333) 0:05:49.320 ******* 2026-02-02 00:59:25.377013 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.377021 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.377029 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.377036 | orchestrator | 2026-02-02 00:59:25.377043 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 00:59:25.377052 | 
orchestrator | Monday 02 February 2026 00:53:25 +0000 (0:00:00.383) 0:05:49.704 ******* 2026-02-02 00:59:25.377059 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.377066 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.377119 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.377127 | orchestrator | 2026-02-02 00:59:25.377135 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 00:59:25.377144 | orchestrator | Monday 02 February 2026 00:53:26 +0000 (0:00:01.082) 0:05:50.786 ******* 2026-02-02 00:59:25.377149 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.377154 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.377159 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.377163 | orchestrator | 2026-02-02 00:59:25.377168 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 00:59:25.377173 | orchestrator | Monday 02 February 2026 00:53:27 +0000 (0:00:00.678) 0:05:51.464 ******* 2026-02-02 00:59:25.377178 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.377183 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.377188 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.377193 | orchestrator | 2026-02-02 00:59:25.377198 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 00:59:25.377203 | orchestrator | Monday 02 February 2026 00:53:27 +0000 (0:00:00.324) 0:05:51.789 ******* 2026-02-02 00:59:25.377208 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.377213 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.377217 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.377222 | orchestrator | 2026-02-02 00:59:25.377227 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 00:59:25.377232 | orchestrator | Monday 02 February 2026 00:53:27 +0000 
(0:00:00.366) 0:05:52.156 ******* 2026-02-02 00:59:25.377237 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.377242 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.377246 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.377251 | orchestrator | 2026-02-02 00:59:25.377256 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 00:59:25.377261 | orchestrator | Monday 02 February 2026 00:53:28 +0000 (0:00:00.475) 0:05:52.632 ******* 2026-02-02 00:59:25.377266 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.377271 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.377313 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.377319 | orchestrator | 2026-02-02 00:59:25.377324 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 00:59:25.377329 | orchestrator | Monday 02 February 2026 00:53:28 +0000 (0:00:00.324) 0:05:52.956 ******* 2026-02-02 00:59:25.377343 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.377351 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.377359 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.377366 | orchestrator | 2026-02-02 00:59:25.377374 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 00:59:25.377382 | orchestrator | Monday 02 February 2026 00:53:28 +0000 (0:00:00.284) 0:05:53.241 ******* 2026-02-02 00:59:25.377389 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.377396 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.377404 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.377411 | orchestrator | 2026-02-02 00:59:25.377418 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 00:59:25.377426 | orchestrator | Monday 02 February 2026 00:53:29 +0000 
(0:00:00.346) 0:05:53.588 ******* 2026-02-02 00:59:25.377434 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.377442 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.377461 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.377468 | orchestrator | 2026-02-02 00:59:25.377474 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 00:59:25.377478 | orchestrator | Monday 02 February 2026 00:53:29 +0000 (0:00:00.322) 0:05:53.911 ******* 2026-02-02 00:59:25.377483 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.377488 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.377493 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.377498 | orchestrator | 2026-02-02 00:59:25.377503 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 00:59:25.377508 | orchestrator | Monday 02 February 2026 00:53:30 +0000 (0:00:00.533) 0:05:54.444 ******* 2026-02-02 00:59:25.377513 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.377518 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.377522 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.377527 | orchestrator | 2026-02-02 00:59:25.377532 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 00:59:25.377537 | orchestrator | Monday 02 February 2026 00:53:30 +0000 (0:00:00.337) 0:05:54.782 ******* 2026-02-02 00:59:25.377542 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.377547 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.377552 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.377557 | orchestrator | 2026-02-02 00:59:25.377562 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-02 00:59:25.377567 | orchestrator | Monday 02 February 2026 00:53:30 +0000 (0:00:00.524) 0:05:55.306 ******* 2026-02-02 
00:59:25.377572 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 00:59:25.377577 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 00:59:25.377583 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 00:59:25.377587 | orchestrator | 2026-02-02 00:59:25.377592 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-02 00:59:25.377597 | orchestrator | Monday 02 February 2026 00:53:31 +0000 (0:00:00.734) 0:05:56.041 ******* 2026-02-02 00:59:25.377607 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:59:25.377612 | orchestrator | 2026-02-02 00:59:25.377617 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-02 00:59:25.377621 | orchestrator | Monday 02 February 2026 00:53:32 +0000 (0:00:00.677) 0:05:56.718 ******* 2026-02-02 00:59:25.377626 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.377631 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.377635 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.377640 | orchestrator | 2026-02-02 00:59:25.377645 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-02 00:59:25.377649 | orchestrator | Monday 02 February 2026 00:53:33 +0000 (0:00:00.668) 0:05:57.386 ******* 2026-02-02 00:59:25.377654 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.377664 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.377669 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.377674 | orchestrator | 2026-02-02 00:59:25.377678 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-02 00:59:25.377683 | orchestrator | Monday 02 February 2026 00:53:33 
+0000 (0:00:00.306) 0:05:57.693 ******* 2026-02-02 00:59:25.377688 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 00:59:25.377692 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 00:59:25.377697 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 00:59:25.377702 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-02 00:59:25.377706 | orchestrator | 2026-02-02 00:59:25.377711 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-02 00:59:25.377716 | orchestrator | Monday 02 February 2026 00:53:43 +0000 (0:00:10.359) 0:06:08.052 ******* 2026-02-02 00:59:25.377720 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.377725 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.377729 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.377734 | orchestrator | 2026-02-02 00:59:25.377739 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-02 00:59:25.377750 | orchestrator | Monday 02 February 2026 00:53:44 +0000 (0:00:00.359) 0:06:08.412 ******* 2026-02-02 00:59:25.377755 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-02 00:59:25.377760 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-02 00:59:25.377764 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-02 00:59:25.377769 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-02 00:59:25.377773 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 00:59:25.377778 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 00:59:25.377783 | orchestrator | 2026-02-02 00:59:25.377809 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-02 00:59:25.377815 | orchestrator | Monday 02 February 2026 00:53:46 +0000 (0:00:01.991) 
0:06:10.404 ******* 2026-02-02 00:59:25.377820 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-02 00:59:25.377825 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-02 00:59:25.377829 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-02 00:59:25.377834 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 00:59:25.377839 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-02 00:59:25.377844 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-02 00:59:25.377848 | orchestrator | 2026-02-02 00:59:25.377853 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-02 00:59:25.377858 | orchestrator | Monday 02 February 2026 00:53:47 +0000 (0:00:01.329) 0:06:11.733 ******* 2026-02-02 00:59:25.377862 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.377867 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.377872 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.377883 | orchestrator | 2026-02-02 00:59:25.377888 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-02 00:59:25.377893 | orchestrator | Monday 02 February 2026 00:53:48 +0000 (0:00:00.739) 0:06:12.472 ******* 2026-02-02 00:59:25.377897 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.377902 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.377907 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.377912 | orchestrator | 2026-02-02 00:59:25.377916 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-02 00:59:25.377921 | orchestrator | Monday 02 February 2026 00:53:48 +0000 (0:00:00.759) 0:06:13.232 ******* 2026-02-02 00:59:25.377926 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.377930 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.377935 | orchestrator | skipping: 
[testbed-node-2] 2026-02-02 00:59:25.377940 | orchestrator | 2026-02-02 00:59:25.377949 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-02 00:59:25.377954 | orchestrator | Monday 02 February 2026 00:53:49 +0000 (0:00:00.387) 0:06:13.620 ******* 2026-02-02 00:59:25.377959 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:59:25.377963 | orchestrator | 2026-02-02 00:59:25.377968 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-02 00:59:25.377973 | orchestrator | Monday 02 February 2026 00:53:49 +0000 (0:00:00.567) 0:06:14.187 ******* 2026-02-02 00:59:25.377977 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.377982 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.377987 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.377991 | orchestrator | 2026-02-02 00:59:25.377996 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-02 00:59:25.378001 | orchestrator | Monday 02 February 2026 00:53:51 +0000 (0:00:01.239) 0:06:15.427 ******* 2026-02-02 00:59:25.378005 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.378043 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.378048 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.378053 | orchestrator | 2026-02-02 00:59:25.378059 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-02 00:59:25.378089 | orchestrator | Monday 02 February 2026 00:53:51 +0000 (0:00:00.344) 0:06:15.771 ******* 2026-02-02 00:59:25.378097 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:59:25.378105 | orchestrator | 2026-02-02 00:59:25.378113 | orchestrator | TASK [ceph-mgr : Generate 
systemd unit file] *********************************** 2026-02-02 00:59:25.378120 | orchestrator | Monday 02 February 2026 00:53:52 +0000 (0:00:00.603) 0:06:16.375 ******* 2026-02-02 00:59:25.378128 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.378135 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.378143 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.378148 | orchestrator | 2026-02-02 00:59:25.378153 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-02 00:59:25.378157 | orchestrator | Monday 02 February 2026 00:53:54 +0000 (0:00:02.091) 0:06:18.466 ******* 2026-02-02 00:59:25.378162 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.378167 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.378171 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.378176 | orchestrator | 2026-02-02 00:59:25.378181 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-02 00:59:25.378185 | orchestrator | Monday 02 February 2026 00:53:55 +0000 (0:00:01.356) 0:06:19.823 ******* 2026-02-02 00:59:25.378190 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.378195 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.378199 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.378204 | orchestrator | 2026-02-02 00:59:25.378209 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-02 00:59:25.378213 | orchestrator | Monday 02 February 2026 00:53:57 +0000 (0:00:01.743) 0:06:21.567 ******* 2026-02-02 00:59:25.378218 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.378223 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.378227 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.378232 | orchestrator | 2026-02-02 00:59:25.378237 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2026-02-02 00:59:25.378241 | orchestrator | Monday 02 February 2026 00:53:59 +0000 (0:00:01.860) 0:06:23.427 ******* 2026-02-02 00:59:25.378246 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.378251 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.378255 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-02 00:59:25.378260 | orchestrator | 2026-02-02 00:59:25.378265 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-02 00:59:25.378274 | orchestrator | Monday 02 February 2026 00:53:59 +0000 (0:00:00.701) 0:06:24.129 ******* 2026-02-02 00:59:25.378278 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-02-02 00:59:25.378304 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-02-02 00:59:25.378310 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-02-02 00:59:25.378315 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-02-02 00:59:25.378319 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-02-02 00:59:25.378324 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-02 00:59:25.378329 | orchestrator | 2026-02-02 00:59:25.378333 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-02 00:59:25.378338 | orchestrator | Monday 02 February 2026 00:54:30 +0000 (0:00:30.230) 0:06:54.359 ******* 2026-02-02 00:59:25.378343 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-02 00:59:25.378347 | orchestrator | 2026-02-02 00:59:25.378352 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-02 00:59:25.378357 | orchestrator | Monday 02 February 2026 00:54:31 +0000 (0:00:01.396) 0:06:55.756 ******* 2026-02-02 00:59:25.378361 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.378366 | orchestrator | 2026-02-02 00:59:25.378371 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-02 00:59:25.378375 | orchestrator | Monday 02 February 2026 00:54:31 +0000 (0:00:00.338) 0:06:56.095 ******* 2026-02-02 00:59:25.378380 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.378384 | orchestrator | 2026-02-02 00:59:25.378389 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-02 00:59:25.378394 | orchestrator | Monday 02 February 2026 00:54:31 +0000 (0:00:00.135) 0:06:56.230 ******* 2026-02-02 00:59:25.378398 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-02-02 00:59:25.378403 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-02-02 00:59:25.378408 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-02-02 00:59:25.378412 | orchestrator | 2026-02-02 00:59:25.378417 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-02-02 00:59:25.378422 | orchestrator | Monday 02 February 2026 00:54:39 +0000 (0:00:07.160) 0:07:03.391 ******* 2026-02-02 00:59:25.378426 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-02 00:59:25.378431 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-02-02 00:59:25.378436 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-02-02 00:59:25.378440 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-02 00:59:25.378445 | orchestrator | 2026-02-02 00:59:25.378450 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-02 00:59:25.378454 | orchestrator | Monday 02 February 2026 00:54:44 +0000 (0:00:05.100) 0:07:08.492 ******* 2026-02-02 00:59:25.378459 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.378470 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.378479 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.378486 | orchestrator | 2026-02-02 00:59:25.378494 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-02 00:59:25.378501 | orchestrator | Monday 02 February 2026 00:54:44 +0000 (0:00:00.798) 0:07:09.290 ******* 2026-02-02 00:59:25.378509 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:59:25.378517 | orchestrator | 2026-02-02 00:59:25.378526 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-02 00:59:25.378540 | orchestrator | Monday 02 February 2026 00:54:45 +0000 (0:00:00.525) 0:07:09.816 ******* 2026-02-02 00:59:25.378548 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.378557 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.378565 | orchestrator | ok: 
[testbed-node-2] 2026-02-02 00:59:25.378573 | orchestrator | 2026-02-02 00:59:25.378581 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-02 00:59:25.378586 | orchestrator | Monday 02 February 2026 00:54:46 +0000 (0:00:00.681) 0:07:10.497 ******* 2026-02-02 00:59:25.378591 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.378595 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.378600 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.378604 | orchestrator | 2026-02-02 00:59:25.378609 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-02 00:59:25.378614 | orchestrator | Monday 02 February 2026 00:54:47 +0000 (0:00:01.167) 0:07:11.665 ******* 2026-02-02 00:59:25.378618 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-02 00:59:25.378623 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-02 00:59:25.378627 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 00:59:25.378632 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.378636 | orchestrator | 2026-02-02 00:59:25.378641 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-02 00:59:25.378646 | orchestrator | Monday 02 February 2026 00:54:48 +0000 (0:00:00.676) 0:07:12.342 ******* 2026-02-02 00:59:25.378650 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.378655 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.378659 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.378664 | orchestrator | 2026-02-02 00:59:25.378668 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-02-02 00:59:25.378673 | orchestrator | 2026-02-02 00:59:25.378678 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 
00:59:25.378682 | orchestrator | Monday 02 February 2026 00:54:48 +0000 (0:00:00.630) 0:07:12.972 ******* 2026-02-02 00:59:25.378687 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.378692 | orchestrator | 2026-02-02 00:59:25.378716 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-02 00:59:25.378722 | orchestrator | Monday 02 February 2026 00:54:49 +0000 (0:00:00.846) 0:07:13.819 ******* 2026-02-02 00:59:25.378727 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.378732 | orchestrator | 2026-02-02 00:59:25.378737 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-02 00:59:25.378741 | orchestrator | Monday 02 February 2026 00:54:50 +0000 (0:00:00.533) 0:07:14.352 ******* 2026-02-02 00:59:25.378746 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.378751 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.378755 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.378760 | orchestrator | 2026-02-02 00:59:25.378765 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-02 00:59:25.378769 | orchestrator | Monday 02 February 2026 00:54:50 +0000 (0:00:00.606) 0:07:14.958 ******* 2026-02-02 00:59:25.378774 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.378779 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.378783 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.378788 | orchestrator | 2026-02-02 00:59:25.378793 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-02 00:59:25.378797 | orchestrator | Monday 02 February 2026 00:54:51 +0000 (0:00:00.699) 0:07:15.658 ******* 
2026-02-02 00:59:25.378802 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.378807 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.378811 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.378822 | orchestrator |
2026-02-02 00:59:25.378827 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 00:59:25.378832 | orchestrator | Monday 02 February 2026 00:54:52 +0000 (0:00:00.732) 0:07:16.391 *******
2026-02-02 00:59:25.378837 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.378841 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.378846 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.378850 | orchestrator |
2026-02-02 00:59:25.378855 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 00:59:25.378860 | orchestrator | Monday 02 February 2026 00:54:52 +0000 (0:00:00.822) 0:07:17.213 *******
2026-02-02 00:59:25.378864 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.378869 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.378873 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.378878 | orchestrator |
2026-02-02 00:59:25.378883 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 00:59:25.378887 | orchestrator | Monday 02 February 2026 00:54:53 +0000 (0:00:00.697) 0:07:17.910 *******
2026-02-02 00:59:25.378892 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.378896 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.378901 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.378906 | orchestrator |
2026-02-02 00:59:25.378910 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 00:59:25.378915 | orchestrator | Monday 02 February 2026 00:54:53 +0000 (0:00:00.335) 0:07:18.246 *******
2026-02-02 00:59:25.378919 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.378924 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.378929 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.378933 | orchestrator |
2026-02-02 00:59:25.378941 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 00:59:25.378946 | orchestrator | Monday 02 February 2026 00:54:54 +0000 (0:00:00.322) 0:07:18.569 *******
2026-02-02 00:59:25.378950 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.378955 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.378959 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.378964 | orchestrator |
2026-02-02 00:59:25.378968 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-02 00:59:25.378973 | orchestrator | Monday 02 February 2026 00:54:55 +0000 (0:00:00.749) 0:07:19.318 *******
2026-02-02 00:59:25.378978 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.378982 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.378987 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.378992 | orchestrator |
2026-02-02 00:59:25.378996 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-02 00:59:25.379001 | orchestrator | Monday 02 February 2026 00:54:56 +0000 (0:00:01.163) 0:07:20.482 *******
2026-02-02 00:59:25.379006 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.379010 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.379015 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.379019 | orchestrator |
2026-02-02 00:59:25.379024 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 00:59:25.379029 | orchestrator | Monday 02 February 2026 00:54:56 +0000 (0:00:00.344) 0:07:20.826 *******
2026-02-02 00:59:25.379033 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.379038 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.379042 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.379047 | orchestrator |
2026-02-02 00:59:25.379052 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 00:59:25.379056 | orchestrator | Monday 02 February 2026 00:54:56 +0000 (0:00:00.316) 0:07:21.143 *******
2026-02-02 00:59:25.379061 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.379066 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.379089 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.379096 | orchestrator |
2026-02-02 00:59:25.379104 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 00:59:25.379116 | orchestrator | Monday 02 February 2026 00:54:57 +0000 (0:00:00.383) 0:07:21.527 *******
2026-02-02 00:59:25.379123 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.379130 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.379138 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.379146 | orchestrator |
2026-02-02 00:59:25.379153 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 00:59:25.379160 | orchestrator | Monday 02 February 2026 00:54:57 +0000 (0:00:00.707) 0:07:22.234 *******
2026-02-02 00:59:25.379169 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.379174 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.379179 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.379183 | orchestrator |
2026-02-02 00:59:25.379191 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 00:59:25.379196 | orchestrator | Monday 02 February 2026 00:54:58 +0000 (0:00:00.382) 0:07:22.617 *******
2026-02-02 00:59:25.379201 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.379208 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.379215 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.379222 | orchestrator |
2026-02-02 00:59:25.379229 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-02 00:59:25.379236 | orchestrator | Monday 02 February 2026 00:54:58 +0000 (0:00:00.347) 0:07:22.965 *******
2026-02-02 00:59:25.379242 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.379249 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.379256 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.379262 | orchestrator |
2026-02-02 00:59:25.379269 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-02 00:59:25.379276 | orchestrator | Monday 02 February 2026 00:54:58 +0000 (0:00:00.321) 0:07:23.286 *******
2026-02-02 00:59:25.379283 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.379290 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.379298 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.379305 | orchestrator |
2026-02-02 00:59:25.379313 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-02 00:59:25.379321 | orchestrator | Monday 02 February 2026 00:54:59 +0000 (0:00:00.656) 0:07:23.943 *******
2026-02-02 00:59:25.379328 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.379335 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.379343 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.379349 | orchestrator |
2026-02-02 00:59:25.379356 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-02 00:59:25.379363 | orchestrator | Monday 02 February 2026 00:54:59 +0000 (0:00:00.362) 0:07:24.306 *******
2026-02-02 00:59:25.379370 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.379378 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.379384 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.379390 | orchestrator |
2026-02-02 00:59:25.379396 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-02 00:59:25.379403 | orchestrator | Monday 02 February 2026 00:55:00 +0000 (0:00:00.751) 0:07:25.057 *******
2026-02-02 00:59:25.379410 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.379417 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.379425 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.379432 | orchestrator |
2026-02-02 00:59:25.379439 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-02 00:59:25.379445 | orchestrator | Monday 02 February 2026 00:55:01 +0000 (0:00:00.692) 0:07:25.750 *******
2026-02-02 00:59:25.379452 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 00:59:25.379460 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 00:59:25.379466 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 00:59:25.379473 | orchestrator |
2026-02-02 00:59:25.379480 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-02 00:59:25.379495 | orchestrator | Monday 02 February 2026 00:55:02 +0000 (0:00:00.683) 0:07:26.434 *******
2026-02-02 00:59:25.379507 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.379514 | orchestrator |
2026-02-02 00:59:25.379522 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-02 00:59:25.379529 | orchestrator | Monday 02 February 2026 00:55:02 +0000 (0:00:00.569) 0:07:27.003 *******
2026-02-02 00:59:25.379536 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.379545 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.379550 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.379555 | orchestrator |
2026-02-02 00:59:25.379559 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-02 00:59:25.379564 | orchestrator | Monday 02 February 2026 00:55:03 +0000 (0:00:00.331) 0:07:27.335 *******
2026-02-02 00:59:25.379569 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.379573 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.379580 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.379587 | orchestrator |
2026-02-02 00:59:25.379594 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-02 00:59:25.379601 | orchestrator | Monday 02 February 2026 00:55:03 +0000 (0:00:00.626) 0:07:27.961 *******
2026-02-02 00:59:25.379609 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.379616 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.379623 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.379630 | orchestrator |
2026-02-02 00:59:25.379636 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-02 00:59:25.379643 | orchestrator | Monday 02 February 2026 00:55:04 +0000 (0:00:00.464) 0:07:28.607 *******
2026-02-02 00:59:25.379650 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.379657 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.379664 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.379671 | orchestrator |
2026-02-02 00:59:25.379678 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-02 00:59:25.379685 | orchestrator | Monday 02 February 2026 00:55:04 +0000 (0:00:00.464) 0:07:29.071 *******
2026-02-02 00:59:25.379692 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-02 00:59:25.379700 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-02 00:59:25.379708 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-02 00:59:25.379715 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-02 00:59:25.379722 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-02 00:59:25.379730 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-02 00:59:25.379748 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-02 00:59:25.379755 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-02 00:59:25.379763 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-02 00:59:25.379769 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-02 00:59:25.379776 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-02 00:59:25.379783 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-02 00:59:25.379790 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-02 00:59:25.379798 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-02 00:59:25.379805 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-02 00:59:25.379820 | orchestrator |
2026-02-02 00:59:25.379829 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-02 00:59:25.379837 | orchestrator | Monday 02 February 2026 00:55:07 +0000 (0:00:03.075) 0:07:32.146 *******
2026-02-02 00:59:25.379844 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.379851 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.379860 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.379865 | orchestrator |
2026-02-02 00:59:25.379870 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-02 00:59:25.379875 | orchestrator | Monday 02 February 2026 00:55:08 +0000 (0:00:00.456) 0:07:32.603 *******
2026-02-02 00:59:25.379879 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.379884 | orchestrator |
2026-02-02 00:59:25.379889 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-02 00:59:25.379893 | orchestrator | Monday 02 February 2026 00:55:08 +0000 (0:00:00.485) 0:07:33.088 *******
2026-02-02 00:59:25.379898 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-02 00:59:25.379903 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-02 00:59:25.379907 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-02 00:59:25.379912 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-02 00:59:25.379917 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-02-02 00:59:25.379922 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-02-02 00:59:25.379929 | orchestrator |
2026-02-02 00:59:25.379936 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-02 00:59:25.379943 | orchestrator | Monday 02 February 2026 00:55:09 +0000 (0:00:01.022) 0:07:34.110 *******
2026-02-02 00:59:25.379949 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 00:59:25.379957 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-02 00:59:25.379964 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-02 00:59:25.379971 | orchestrator |
2026-02-02 00:59:25.379978 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-02 00:59:25.379985 | orchestrator | Monday 02 February 2026 00:55:12 +0000 (0:00:02.580) 0:07:36.691 *******
2026-02-02 00:59:25.379992 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-02 00:59:25.380000 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-02 00:59:25.380184 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:59:25.380216 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-02 00:59:25.380221 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-02 00:59:25.380225 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:59:25.380230 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-02 00:59:25.380235 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-02 00:59:25.380239 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:59:25.380244 | orchestrator |
2026-02-02 00:59:25.380249 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-02 00:59:25.380254 | orchestrator | Monday 02 February 2026 00:55:13 +0000 (0:00:01.255) 0:07:37.947 *******
2026-02-02 00:59:25.380259 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-02 00:59:25.380263 | orchestrator |
2026-02-02 00:59:25.380268 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-02 00:59:25.380272 | orchestrator | Monday 02 February 2026 00:55:15 +0000 (0:00:02.345) 0:07:40.292 *******
2026-02-02 00:59:25.380277 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.380282 | orchestrator |
2026-02-02 00:59:25.380286 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-02-02 00:59:25.380296 | orchestrator | Monday 02 February 2026 00:55:16 +0000 (0:00:00.596) 0:07:40.888 *******
2026-02-02 00:59:25.380301 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-604951f0-1bde-54b3-957a-2369560b0fa2', 'data_vg': 'ceph-604951f0-1bde-54b3-957a-2369560b0fa2'})
2026-02-02 00:59:25.380306 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-91c179ef-578a-54fb-a2b0-5b892bd3ac18', 'data_vg': 'ceph-91c179ef-578a-54fb-a2b0-5b892bd3ac18'})
2026-02-02 00:59:25.380310 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7', 'data_vg': 'ceph-ee22aeb6-8be3-5eb7-a208-f7c11744cdf7'})
2026-02-02 00:59:25.380322 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-edd20676-fc89-5b2b-b977-99722e90cce2', 'data_vg': 'ceph-edd20676-fc89-5b2b-b977-99722e90cce2'})
2026-02-02 00:59:25.380327 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-91730114-ee0c-5e20-9378-f20099298830', 'data_vg': 'ceph-91730114-ee0c-5e20-9378-f20099298830'})
2026-02-02 00:59:25.380331 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0f572543-3461-541d-9614-18cfec52b251', 'data_vg': 'ceph-0f572543-3461-541d-9614-18cfec52b251'})
2026-02-02 00:59:25.380335 | orchestrator |
2026-02-02 00:59:25.380339 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-02 00:59:25.380344 | orchestrator | Monday 02 February 2026 00:55:52 +0000 (0:00:36.043) 0:08:16.932 *******
2026-02-02 00:59:25.380348 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.380352 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.380356 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.380361 | orchestrator |
2026-02-02 00:59:25.380365 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-02 00:59:25.380369 | orchestrator | Monday 02 February 2026 00:55:52 +0000 (0:00:00.372) 0:08:17.305 *******
2026-02-02 00:59:25.380373 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.380378 | orchestrator |
2026-02-02 00:59:25.380382 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-02 00:59:25.380386 | orchestrator | Monday 02 February 2026 00:55:53 +0000 (0:00:00.563) 0:08:17.868 *******
2026-02-02 00:59:25.380390 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.380394 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.380399 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.380403 | orchestrator |
2026-02-02 00:59:25.380407 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-02 00:59:25.380411 | orchestrator | Monday 02 February 2026 00:55:54 +0000 (0:00:01.012) 0:08:18.881 *******
2026-02-02 00:59:25.380415 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.380420 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.380424 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.380428 | orchestrator |
2026-02-02 00:59:25.380432 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-02 00:59:25.380438 | orchestrator | Monday 02 February 2026 00:55:56 +0000 (0:00:02.415) 0:08:21.297 *******
2026-02-02 00:59:25.380444 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.380451 | orchestrator |
2026-02-02 00:59:25.380456 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-02 00:59:25.380463 | orchestrator | Monday 02 February 2026 00:55:57 +0000 (0:00:00.545) 0:08:21.843 *******
2026-02-02 00:59:25.380469 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:59:25.380475 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:59:25.380481 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:59:25.380487 | orchestrator |
2026-02-02 00:59:25.380492 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-02 00:59:25.380502 | orchestrator | Monday 02 February 2026 00:55:59 +0000 (0:00:01.767) 0:08:23.611 *******
2026-02-02 00:59:25.380513 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:59:25.380528 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:59:25.380536 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:59:25.380543 | orchestrator |
2026-02-02 00:59:25.380549 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-02 00:59:25.380556 | orchestrator | Monday 02 February 2026 00:56:00 +0000 (0:00:01.275) 0:08:24.886 *******
2026-02-02 00:59:25.380562 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:59:25.380570 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:59:25.380576 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:59:25.380580 | orchestrator |
2026-02-02 00:59:25.380599 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-02 00:59:25.380603 | orchestrator | Monday 02 February 2026 00:56:02 +0000 (0:00:01.843) 0:08:26.730 *******
2026-02-02 00:59:25.380608 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.380612 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.380616 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.380620 | orchestrator |
2026-02-02 00:59:25.380624 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-02 00:59:25.380629 | orchestrator | Monday 02 February 2026 00:56:02 +0000 (0:00:00.359) 0:08:27.089 *******
2026-02-02 00:59:25.380633 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.380637 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.380641 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.380645 | orchestrator |
2026-02-02 00:59:25.380649 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-02 00:59:25.380654 | orchestrator | Monday 02 February 2026 00:56:03 +0000 (0:00:00.617) 0:08:27.707 *******
2026-02-02 00:59:25.380658 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-02-02 00:59:25.380662 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-02-02 00:59:25.380666 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-02-02 00:59:25.380670 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-02 00:59:25.380674 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-02-02 00:59:25.380679 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-02-02 00:59:25.380683 | orchestrator |
2026-02-02 00:59:25.380687 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-02 00:59:25.380691 | orchestrator | Monday 02 February 2026 00:56:04 +0000 (0:00:01.141) 0:08:28.848 *******
2026-02-02 00:59:25.380695 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-02-02 00:59:25.380699 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-02-02 00:59:25.380704 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-02 00:59:25.380708 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-02 00:59:25.380712 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-02 00:59:25.380721 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-02 00:59:25.380725 | orchestrator |
2026-02-02 00:59:25.380734 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-02 00:59:25.380739 | orchestrator | Monday 02 February 2026 00:56:06 +0000 (0:00:02.460) 0:08:31.309 *******
2026-02-02 00:59:25.380743 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-02-02 00:59:25.380747 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-02 00:59:25.380752 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-02-02 00:59:25.380756 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-02 00:59:25.380760 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-02 00:59:25.380764 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-02 00:59:25.380768 | orchestrator |
2026-02-02 00:59:25.380773 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-02 00:59:25.380777 | orchestrator | Monday 02 February 2026 00:56:11 +0000 (0:00:04.095) 0:08:35.405 *******
2026-02-02 00:59:25.380781 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.380785 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.380790 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-02 00:59:25.380798 | orchestrator |
2026-02-02 00:59:25.380802 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-02 00:59:25.380806 | orchestrator | Monday 02 February 2026 00:56:14 +0000 (0:00:03.383) 0:08:38.789 *******
2026-02-02 00:59:25.380810 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.380815 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.380819 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-02 00:59:25.380823 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-02 00:59:25.380827 | orchestrator |
2026-02-02 00:59:25.380831 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-02 00:59:25.380836 | orchestrator | Monday 02 February 2026 00:56:27 +0000 (0:00:12.564) 0:08:51.353 *******
2026-02-02 00:59:25.380840 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.380844 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.380848 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.380852 | orchestrator |
2026-02-02 00:59:25.380856 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-02 00:59:25.380861 | orchestrator | Monday 02 February 2026 00:56:28 +0000 (0:00:01.093) 0:08:52.446 *******
2026-02-02 00:59:25.380865 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.380869 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.380873 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.380877 | orchestrator |
2026-02-02 00:59:25.380881 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-02 00:59:25.380886 | orchestrator | Monday 02 February 2026 00:56:28 +0000 (0:00:00.388) 0:08:52.835 *******
2026-02-02 00:59:25.380890 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.380894 | orchestrator |
2026-02-02 00:59:25.380898 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-02 00:59:25.380902 | orchestrator | Monday 02 February 2026 00:56:29 +0000 (0:00:00.544) 0:08:53.380 *******
2026-02-02 00:59:25.380909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 00:59:25.380914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 00:59:25.380918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 00:59:25.380922 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.380926 | orchestrator |
2026-02-02 00:59:25.380930 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-02 00:59:25.380935 | orchestrator | Monday 02 February 2026 00:56:29 +0000 (0:00:00.711) 0:08:54.091 *******
2026-02-02 00:59:25.380939 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.380943 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.380947 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.380951 | orchestrator |
2026-02-02 00:59:25.380956 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-02 00:59:25.380960 | orchestrator | Monday 02 February 2026 00:56:30 +0000 (0:00:00.658) 0:08:54.750 *******
2026-02-02 00:59:25.380964 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.380968 | orchestrator |
2026-02-02 00:59:25.380972 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-02 00:59:25.380977 | orchestrator | Monday 02 February 2026 00:56:30 +0000 (0:00:00.250) 0:08:55.001 *******
2026-02-02 00:59:25.380981 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.380985 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.380989 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.380993 | orchestrator |
2026-02-02 00:59:25.380998 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-02 00:59:25.381002 | orchestrator | Monday 02 February 2026 00:56:31 +0000 (0:00:00.322) 0:08:55.323 *******
2026-02-02 00:59:25.381006 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.381014 | orchestrator |
2026-02-02 00:59:25.381018 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-02 00:59:25.381022 | orchestrator | Monday 02 February 2026 00:56:31 +0000 (0:00:00.245) 0:08:55.569 *******
2026-02-02 00:59:25.381027 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.381031 | orchestrator |
2026-02-02 00:59:25.381035 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-02 00:59:25.381039 | orchestrator | Monday 02 February 2026 00:56:31 +0000 (0:00:00.243) 0:08:55.812 *******
2026-02-02 00:59:25.381043 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.381048 | orchestrator |
2026-02-02 00:59:25.381052 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-02 00:59:25.381056 | orchestrator | Monday 02 February 2026 00:56:31 +0000 (0:00:00.134) 0:08:55.947 *******
2026-02-02 00:59:25.381060 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.381064 | orchestrator |
2026-02-02 00:59:25.381082 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-02 00:59:25.381087 | orchestrator | Monday 02 February 2026 00:56:31 +0000 (0:00:00.239) 0:08:56.187 *******
2026-02-02 00:59:25.381094 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.381098 | orchestrator |
2026-02-02 00:59:25.381102 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-02 00:59:25.381107 | orchestrator | Monday 02 February 2026 00:56:32 +0000 (0:00:00.254) 0:08:56.441 *******
2026-02-02 00:59:25.381111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 00:59:25.381115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 00:59:25.381119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 00:59:25.381123 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.381128 | orchestrator |
2026-02-02 00:59:25.381132 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-02 00:59:25.381145 | orchestrator | Monday 02 February 2026 00:56:32 +0000 (0:00:00.713) 0:08:57.155 *******
2026-02-02 00:59:25.381149 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.381154 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.381158 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.381162 | orchestrator |
2026-02-02 00:59:25.381166 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-02 00:59:25.381170 | orchestrator | Monday 02 February 2026 00:56:33 +0000 (0:00:00.609) 0:08:57.764 *******
2026-02-02 00:59:25.381175 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.381179 | orchestrator |
2026-02-02 00:59:25.381183 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-02 00:59:25.381187 | orchestrator | Monday 02 February 2026 00:56:33 +0000 (0:00:00.263) 0:08:58.028 *******
2026-02-02 00:59:25.381191 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.381196 | orchestrator |
2026-02-02 00:59:25.381200 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-02 00:59:25.381204 | orchestrator |
2026-02-02 00:59:25.381208 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 00:59:25.381213 | orchestrator | Monday 02 February 2026 00:56:34 +0000 (0:00:00.858) 0:08:58.887 *******
2026-02-02 00:59:25.381217 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.381223 | orchestrator |
2026-02-02 00:59:25.381227 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 00:59:25.381231 | orchestrator | Monday 02 February 2026 00:56:35 +0000 (0:00:01.399) 0:09:00.287 *******
2026-02-02 00:59:25.381236 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.381240 | orchestrator |
2026-02-02 00:59:25.381244 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 00:59:25.381252 | orchestrator | Monday 02 February 2026 00:56:37 +0000 (0:00:01.370) 0:09:01.657 *******
2026-02-02 00:59:25.381256 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.381260 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.381264 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.381269 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.381276 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.381280 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.381284 | orchestrator |
2026-02-02 00:59:25.381289 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 00:59:25.381293 | orchestrator | Monday 02 February 2026 00:56:38 +0000 (0:00:00.922) 0:09:02.579 *******
2026-02-02 00:59:25.381297 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.381301 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.381306 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.381310 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.381314 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.381320 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.381326 | orchestrator |
2026-02-02 00:59:25.381333 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 00:59:25.381337 | orchestrator | Monday 02 February 2026 00:56:39 +0000 (0:00:01.085) 0:09:03.665 *******
2026-02-02 00:59:25.381341 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.381345 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.381350 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.381354 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.381358 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.381362 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.381366 | orchestrator |
2026-02-02 00:59:25.381371 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 00:59:25.381375 | orchestrator | Monday 02 February 2026 00:56:40 +0000 (0:00:01.344) 0:09:05.010 *******
2026-02-02 00:59:25.381379 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.381383 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.381387 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.381391 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.381396 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.381400 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.381404 | orchestrator |
2026-02-02 00:59:25.381408 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 00:59:25.381414 | orchestrator | Monday 02 February 2026 00:56:41 +0000 (0:00:01.119) 0:09:06.129 *******
2026-02-02 00:59:25.381421 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.381425 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.381429 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.381433 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.381437 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.381441 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.381445 | orchestrator |
2026-02-02 00:59:25.381450 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 00:59:25.381454 | orchestrator | Monday 02 February 2026 00:56:43 +0000 (0:00:01.232) 0:09:07.362 *******
2026-02-02 00:59:25.381458 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.381462 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.381466 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.381470 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.381475 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.381482 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.381486 | orchestrator |
2026-02-02 00:59:25.381490 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 00:59:25.381495 | orchestrator | Monday 02 February 2026 00:56:43 +0000 (0:00:00.675) 0:09:08.037 *******
2026-02-02 00:59:25.381499 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:25.381503 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:25.381513 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:25.381517 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.381521 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.381525 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.381530 | orchestrator |
2026-02-02 00:59:25.381534 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 00:59:25.381538 | orchestrator | Monday 02 February 2026 00:56:44 +0000 (0:00:00.930) 0:09:08.968 *******
2026-02-02 00:59:25.381542 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:25.381547 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:25.381551 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:25.381556 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.381563 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.381568 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.381572 |
orchestrator | 2026-02-02 00:59:25.381576 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 00:59:25.381581 | orchestrator | Monday 02 February 2026 00:56:46 +0000 (0:00:01.372) 0:09:10.340 ******* 2026-02-02 00:59:25.381585 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.381589 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.381593 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.381597 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.381601 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.381605 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.381609 | orchestrator | 2026-02-02 00:59:25.381614 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 00:59:25.381618 | orchestrator | Monday 02 February 2026 00:56:47 +0000 (0:00:01.777) 0:09:12.118 ******* 2026-02-02 00:59:25.381622 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.381626 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.381630 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.381635 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.381639 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.381643 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.381647 | orchestrator | 2026-02-02 00:59:25.381652 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 00:59:25.381660 | orchestrator | Monday 02 February 2026 00:56:48 +0000 (0:00:00.658) 0:09:12.776 ******* 2026-02-02 00:59:25.381664 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.381676 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.381680 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.381685 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.381689 | orchestrator | skipping: [testbed-node-4] 2026-02-02 
00:59:25.381693 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.381698 | orchestrator | 2026-02-02 00:59:25.381702 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 00:59:25.381707 | orchestrator | Monday 02 February 2026 00:56:49 +0000 (0:00:00.883) 0:09:13.659 ******* 2026-02-02 00:59:25.381711 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.381715 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.381719 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.381726 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.381730 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.381734 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.381738 | orchestrator | 2026-02-02 00:59:25.381743 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 00:59:25.381747 | orchestrator | Monday 02 February 2026 00:56:50 +0000 (0:00:00.653) 0:09:14.313 ******* 2026-02-02 00:59:25.381751 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.381755 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.381760 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.381764 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.381768 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.381772 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.381780 | orchestrator | 2026-02-02 00:59:25.381784 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 00:59:25.381789 | orchestrator | Monday 02 February 2026 00:56:50 +0000 (0:00:00.897) 0:09:15.210 ******* 2026-02-02 00:59:25.381793 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.381798 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.381803 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.381819 | orchestrator | ok: 
[testbed-node-3] 2026-02-02 00:59:25.381824 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.381828 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.381832 | orchestrator | 2026-02-02 00:59:25.381836 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 00:59:25.381840 | orchestrator | Monday 02 February 2026 00:56:51 +0000 (0:00:00.685) 0:09:15.896 ******* 2026-02-02 00:59:25.381844 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.381849 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.381853 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.381857 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.381861 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.381865 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.381870 | orchestrator | 2026-02-02 00:59:25.381874 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 00:59:25.381878 | orchestrator | Monday 02 February 2026 00:56:52 +0000 (0:00:00.866) 0:09:16.762 ******* 2026-02-02 00:59:25.381882 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:25.381887 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:25.381891 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:25.381895 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.381899 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.381903 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.381907 | orchestrator | 2026-02-02 00:59:25.381912 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 00:59:25.381916 | orchestrator | Monday 02 February 2026 00:56:53 +0000 (0:00:00.602) 0:09:17.365 ******* 2026-02-02 00:59:25.381920 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.381924 | orchestrator | ok: [testbed-node-1] 2026-02-02 
00:59:25.381928 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.381936 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.381941 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.381945 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.381949 | orchestrator | 2026-02-02 00:59:25.381953 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 00:59:25.381957 | orchestrator | Monday 02 February 2026 00:56:53 +0000 (0:00:00.610) 0:09:17.976 ******* 2026-02-02 00:59:25.381962 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.381966 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.381970 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.381975 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.381979 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.381983 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.381988 | orchestrator | 2026-02-02 00:59:25.381992 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 00:59:25.381996 | orchestrator | Monday 02 February 2026 00:56:54 +0000 (0:00:01.064) 0:09:19.040 ******* 2026-02-02 00:59:25.382001 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.382005 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.382009 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.382013 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.382052 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.382058 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.382064 | orchestrator | 2026-02-02 00:59:25.382081 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-02 00:59:25.382085 | orchestrator | Monday 02 February 2026 00:56:56 +0000 (0:00:01.363) 0:09:20.403 ******* 2026-02-02 00:59:25.382089 | orchestrator | changed: [testbed-node-0] 2026-02-02 
00:59:25.382101 | orchestrator | 2026-02-02 00:59:25.382105 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-02 00:59:25.382109 | orchestrator | Monday 02 February 2026 00:57:00 +0000 (0:00:04.518) 0:09:24.921 ******* 2026-02-02 00:59:25.382113 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.382117 | orchestrator | 2026-02-02 00:59:25.382122 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-02 00:59:25.382126 | orchestrator | Monday 02 February 2026 00:57:02 +0000 (0:00:02.082) 0:09:27.004 ******* 2026-02-02 00:59:25.382130 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.382135 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.382139 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.382143 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.382147 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.382152 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.382157 | orchestrator | 2026-02-02 00:59:25.382162 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-02 00:59:25.382166 | orchestrator | Monday 02 February 2026 00:57:04 +0000 (0:00:01.884) 0:09:28.888 ******* 2026-02-02 00:59:25.382171 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.382175 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.382179 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.382183 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.382188 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.382192 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.382196 | orchestrator | 2026-02-02 00:59:25.382201 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-02-02 00:59:25.382205 | orchestrator | Monday 02 February 2026 00:57:05 +0000 
(0:00:01.096) 0:09:29.984 ******* 2026-02-02 00:59:25.382212 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.382218 | orchestrator | 2026-02-02 00:59:25.382223 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-02 00:59:25.382227 | orchestrator | Monday 02 February 2026 00:57:06 +0000 (0:00:01.278) 0:09:31.263 ******* 2026-02-02 00:59:25.382232 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.382236 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.382240 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.382245 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.382249 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.382254 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.382259 | orchestrator | 2026-02-02 00:59:25.382263 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-02 00:59:25.382268 | orchestrator | Monday 02 February 2026 00:57:08 +0000 (0:00:01.848) 0:09:33.111 ******* 2026-02-02 00:59:25.382272 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.382276 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.382280 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.382284 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.382289 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.382293 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.382298 | orchestrator | 2026-02-02 00:59:25.382302 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-02-02 00:59:25.382307 | orchestrator | Monday 02 February 2026 00:57:12 +0000 (0:00:03.642) 0:09:36.754 ******* 2026-02-02 00:59:25.382312 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.382317 | orchestrator | 2026-02-02 00:59:25.382321 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-02-02 00:59:25.382325 | orchestrator | Monday 02 February 2026 00:57:13 +0000 (0:00:01.392) 0:09:38.146 ******* 2026-02-02 00:59:25.382329 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.382341 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.382346 | orchestrator | ok: [testbed-node-2] 2026-02-02 00:59:25.382350 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.382354 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.382359 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.382363 | orchestrator | 2026-02-02 00:59:25.382368 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-02-02 00:59:25.382372 | orchestrator | Monday 02 February 2026 00:57:14 +0000 (0:00:00.688) 0:09:38.834 ******* 2026-02-02 00:59:25.382377 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:25.382381 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:25.382385 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.382390 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:25.382394 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.382403 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.382407 | orchestrator | 2026-02-02 00:59:25.382412 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-02-02 00:59:25.382416 | orchestrator | Monday 02 February 2026 00:57:17 +0000 (0:00:02.482) 0:09:41.317 ******* 2026-02-02 00:59:25.382420 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:25.382425 | orchestrator | ok: [testbed-node-1] 2026-02-02 00:59:25.382429 | orchestrator | ok: 
[testbed-node-2] 2026-02-02 00:59:25.382433 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.382438 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.382442 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.382447 | orchestrator | 2026-02-02 00:59:25.382451 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-02-02 00:59:25.382455 | orchestrator | 2026-02-02 00:59:25.382460 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 00:59:25.382464 | orchestrator | Monday 02 February 2026 00:57:18 +0000 (0:00:01.189) 0:09:42.507 ******* 2026-02-02 00:59:25.382469 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.382473 | orchestrator | 2026-02-02 00:59:25.382478 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-02 00:59:25.382482 | orchestrator | Monday 02 February 2026 00:57:18 +0000 (0:00:00.546) 0:09:43.053 ******* 2026-02-02 00:59:25.382486 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.382491 | orchestrator | 2026-02-02 00:59:25.382495 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-02 00:59:25.382499 | orchestrator | Monday 02 February 2026 00:57:19 +0000 (0:00:00.843) 0:09:43.896 ******* 2026-02-02 00:59:25.382504 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.382508 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.382513 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.382517 | orchestrator | 2026-02-02 00:59:25.382521 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-02 00:59:25.382526 | orchestrator | 
Monday 02 February 2026 00:57:19 +0000 (0:00:00.339) 0:09:44.235 ******* 2026-02-02 00:59:25.382530 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.382535 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.382539 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.382543 | orchestrator | 2026-02-02 00:59:25.382547 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-02 00:59:25.382552 | orchestrator | Monday 02 February 2026 00:57:20 +0000 (0:00:00.797) 0:09:45.033 ******* 2026-02-02 00:59:25.382556 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.382561 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.382565 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.382569 | orchestrator | 2026-02-02 00:59:25.382574 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-02 00:59:25.382578 | orchestrator | Monday 02 February 2026 00:57:21 +0000 (0:00:00.752) 0:09:45.786 ******* 2026-02-02 00:59:25.382850 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.382858 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.382862 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.382866 | orchestrator | 2026-02-02 00:59:25.382871 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-02 00:59:25.382879 | orchestrator | Monday 02 February 2026 00:57:22 +0000 (0:00:01.117) 0:09:46.904 ******* 2026-02-02 00:59:25.382883 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.382888 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.382892 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.382896 | orchestrator | 2026-02-02 00:59:25.382900 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-02 00:59:25.382904 | orchestrator | Monday 02 February 2026 00:57:22 +0000 (0:00:00.345) 
0:09:47.249 ******* 2026-02-02 00:59:25.382908 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.382913 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.382917 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.382921 | orchestrator | 2026-02-02 00:59:25.382925 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-02 00:59:25.382929 | orchestrator | Monday 02 February 2026 00:57:23 +0000 (0:00:00.321) 0:09:47.570 ******* 2026-02-02 00:59:25.382933 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.382937 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.382942 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.382946 | orchestrator | 2026-02-02 00:59:25.382950 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 00:59:25.382954 | orchestrator | Monday 02 February 2026 00:57:23 +0000 (0:00:00.315) 0:09:47.885 ******* 2026-02-02 00:59:25.382958 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.382962 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.382966 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.382971 | orchestrator | 2026-02-02 00:59:25.382975 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 00:59:25.382979 | orchestrator | Monday 02 February 2026 00:57:24 +0000 (0:00:01.139) 0:09:49.025 ******* 2026-02-02 00:59:25.382983 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.382987 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.382992 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.382996 | orchestrator | 2026-02-02 00:59:25.383000 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 00:59:25.383004 | orchestrator | Monday 02 February 2026 00:57:25 +0000 (0:00:00.748) 0:09:49.774 ******* 2026-02-02 
00:59:25.383008 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.383013 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.383017 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.383021 | orchestrator | 2026-02-02 00:59:25.383025 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 00:59:25.383029 | orchestrator | Monday 02 February 2026 00:57:25 +0000 (0:00:00.361) 0:09:50.135 ******* 2026-02-02 00:59:25.383033 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.383038 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.383042 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.383046 | orchestrator | 2026-02-02 00:59:25.383050 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 00:59:25.383058 | orchestrator | Monday 02 February 2026 00:57:26 +0000 (0:00:00.357) 0:09:50.493 ******* 2026-02-02 00:59:25.383097 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.383102 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.383106 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.383110 | orchestrator | 2026-02-02 00:59:25.383114 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 00:59:25.383119 | orchestrator | Monday 02 February 2026 00:57:26 +0000 (0:00:00.679) 0:09:51.172 ******* 2026-02-02 00:59:25.383123 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.383127 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.383136 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.383146 | orchestrator | 2026-02-02 00:59:25.383151 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 00:59:25.383155 | orchestrator | Monday 02 February 2026 00:57:27 +0000 (0:00:00.343) 0:09:51.516 ******* 2026-02-02 00:59:25.383159 | orchestrator | ok: 
[testbed-node-3] 2026-02-02 00:59:25.383163 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.383167 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.383171 | orchestrator | 2026-02-02 00:59:25.383176 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 00:59:25.383194 | orchestrator | Monday 02 February 2026 00:57:27 +0000 (0:00:00.378) 0:09:51.894 ******* 2026-02-02 00:59:25.383199 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.383204 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.383208 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.383212 | orchestrator | 2026-02-02 00:59:25.383216 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 00:59:25.383221 | orchestrator | Monday 02 February 2026 00:57:27 +0000 (0:00:00.330) 0:09:52.224 ******* 2026-02-02 00:59:25.383225 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.383229 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.383233 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.383237 | orchestrator | 2026-02-02 00:59:25.383241 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 00:59:25.383260 | orchestrator | Monday 02 February 2026 00:57:28 +0000 (0:00:00.603) 0:09:52.828 ******* 2026-02-02 00:59:25.383265 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.383269 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.383273 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.383277 | orchestrator | 2026-02-02 00:59:25.383281 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 00:59:25.383286 | orchestrator | Monday 02 February 2026 00:57:28 +0000 (0:00:00.355) 0:09:53.183 ******* 2026-02-02 00:59:25.383290 | orchestrator | ok: [testbed-node-3] 
2026-02-02 00:59:25.383294 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.383298 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.383319 | orchestrator | 2026-02-02 00:59:25.383324 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 00:59:25.383328 | orchestrator | Monday 02 February 2026 00:57:29 +0000 (0:00:00.365) 0:09:53.548 ******* 2026-02-02 00:59:25.383332 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.383337 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.383341 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.383345 | orchestrator | 2026-02-02 00:59:25.383349 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-02 00:59:25.383357 | orchestrator | Monday 02 February 2026 00:57:30 +0000 (0:00:00.917) 0:09:54.466 ******* 2026-02-02 00:59:25.383362 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.383367 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.383372 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-02-02 00:59:25.383377 | orchestrator | 2026-02-02 00:59:25.383382 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-02-02 00:59:25.383386 | orchestrator | Monday 02 February 2026 00:57:30 +0000 (0:00:00.444) 0:09:54.910 ******* 2026-02-02 00:59:25.383391 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 00:59:25.383411 | orchestrator | 2026-02-02 00:59:25.383416 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-02-02 00:59:25.383421 | orchestrator | Monday 02 February 2026 00:57:32 +0000 (0:00:02.148) 0:09:57.059 ******* 2026-02-02 00:59:25.383427 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-02-02 00:59:25.383439 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.383444 | orchestrator | 2026-02-02 00:59:25.383449 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-02-02 00:59:25.383454 | orchestrator | Monday 02 February 2026 00:57:33 +0000 (0:00:00.252) 0:09:57.311 ******* 2026-02-02 00:59:25.383460 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-02 00:59:25.383470 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-02 00:59:25.383475 | orchestrator | 2026-02-02 00:59:25.383479 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-02-02 00:59:25.383484 | orchestrator | Monday 02 February 2026 00:57:40 +0000 (0:00:07.820) 0:10:05.132 ******* 2026-02-02 00:59:25.383489 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 00:59:25.383494 | orchestrator | 2026-02-02 00:59:25.383499 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-02 00:59:25.383507 | orchestrator | Monday 02 February 2026 00:57:44 +0000 (0:00:03.606) 0:10:08.738 ******* 2026-02-02 00:59:25.383512 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.383517 | orchestrator | 2026-02-02 00:59:25.383522 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-02 00:59:25.383527 | orchestrator | Monday 02 February 2026 00:57:45 +0000 (0:00:00.882) 0:10:09.620 ******* 2026-02-02 00:59:25.383540 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-02 00:59:25.383545 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-02 00:59:25.383550 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-02 00:59:25.383555 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-02 00:59:25.383560 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-02 00:59:25.383565 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-02-02 00:59:25.383570 | orchestrator | 2026-02-02 00:59:25.383575 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-02 00:59:25.383579 | orchestrator | Monday 02 February 2026 00:57:46 +0000 (0:00:01.217) 0:10:10.837 ******* 2026-02-02 00:59:25.383584 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 00:59:25.383589 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 00:59:25.383608 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 00:59:25.383613 | orchestrator | 2026-02-02 00:59:25.383618 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-02 00:59:25.383623 | orchestrator | Monday 02 February 2026 00:57:48 +0000 (0:00:02.167) 0:10:13.004 ******* 2026-02-02 00:59:25.383628 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-02 00:59:25.383632 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 00:59:25.383637 | orchestrator | changed: [testbed-node-3] 
2026-02-02 00:59:25.383642 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-02 00:59:25.383647 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-02 00:59:25.383652 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.383662 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-02 00:59:25.383666 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-02 00:59:25.383671 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.383676 | orchestrator | 2026-02-02 00:59:25.383686 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-02 00:59:25.383691 | orchestrator | Monday 02 February 2026 00:57:49 +0000 (0:00:01.243) 0:10:14.248 ******* 2026-02-02 00:59:25.383708 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.383726 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.383731 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.383735 | orchestrator | 2026-02-02 00:59:25.383740 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-02 00:59:25.383747 | orchestrator | Monday 02 February 2026 00:57:53 +0000 (0:00:03.145) 0:10:17.393 ******* 2026-02-02 00:59:25.383751 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.383755 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.383759 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.383764 | orchestrator | 2026-02-02 00:59:25.383768 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-02 00:59:25.383772 | orchestrator | Monday 02 February 2026 00:57:53 +0000 (0:00:00.404) 0:10:17.798 ******* 2026-02-02 00:59:25.383776 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.383780 | orchestrator | 2026-02-02 00:59:25.383785 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-02 00:59:25.383789 | orchestrator | Monday 02 February 2026 00:57:54 +0000 (0:00:00.623) 0:10:18.422 ******* 2026-02-02 00:59:25.383793 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.383797 | orchestrator | 2026-02-02 00:59:25.383801 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-02 00:59:25.383806 | orchestrator | Monday 02 February 2026 00:57:55 +0000 (0:00:00.954) 0:10:19.376 ******* 2026-02-02 00:59:25.383810 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.383814 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.383818 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.383822 | orchestrator | 2026-02-02 00:59:25.383826 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-02 00:59:25.383831 | orchestrator | Monday 02 February 2026 00:57:56 +0000 (0:00:01.601) 0:10:20.978 ******* 2026-02-02 00:59:25.383835 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.383839 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.383843 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.383847 | orchestrator | 2026-02-02 00:59:25.383851 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-02 00:59:25.383856 | orchestrator | Monday 02 February 2026 00:57:57 +0000 (0:00:01.278) 0:10:22.256 ******* 2026-02-02 00:59:25.383860 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.383864 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.383868 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.383872 | orchestrator | 2026-02-02 00:59:25.383877 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2026-02-02 00:59:25.383881 | orchestrator | Monday 02 February 2026 00:58:00 +0000 (0:00:02.224) 0:10:24.481 ******* 2026-02-02 00:59:25.383885 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.383889 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.383893 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.383898 | orchestrator | 2026-02-02 00:59:25.383902 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-02 00:59:25.383909 | orchestrator | Monday 02 February 2026 00:58:02 +0000 (0:00:02.048) 0:10:26.530 ******* 2026-02-02 00:59:25.383914 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.383918 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.383922 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.383926 | orchestrator | 2026-02-02 00:59:25.383931 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-02 00:59:25.383935 | orchestrator | Monday 02 February 2026 00:58:03 +0000 (0:00:01.309) 0:10:27.840 ******* 2026-02-02 00:59:25.383943 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.383963 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.383967 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.383971 | orchestrator | 2026-02-02 00:59:25.383976 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-02 00:59:25.383980 | orchestrator | Monday 02 February 2026 00:58:04 +0000 (0:00:00.708) 0:10:28.548 ******* 2026-02-02 00:59:25.383985 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.383989 | orchestrator | 2026-02-02 00:59:25.383993 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-02-02 00:59:25.383997 | orchestrator | 
Monday 02 February 2026 00:58:04 +0000 (0:00:00.631) 0:10:29.179 ******* 2026-02-02 00:59:25.384001 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.384006 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.384010 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.384014 | orchestrator | 2026-02-02 00:59:25.384018 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-02 00:59:25.384022 | orchestrator | Monday 02 February 2026 00:58:05 +0000 (0:00:00.618) 0:10:29.798 ******* 2026-02-02 00:59:25.384027 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.384031 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.384035 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.384039 | orchestrator | 2026-02-02 00:59:25.384043 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-02 00:59:25.384048 | orchestrator | Monday 02 February 2026 00:58:06 +0000 (0:00:01.239) 0:10:31.037 ******* 2026-02-02 00:59:25.384052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 00:59:25.384056 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 00:59:25.384060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 00:59:25.384064 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.384118 | orchestrator | 2026-02-02 00:59:25.384123 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-02 00:59:25.384127 | orchestrator | Monday 02 February 2026 00:58:07 +0000 (0:00:00.618) 0:10:31.655 ******* 2026-02-02 00:59:25.384131 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.384136 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.384140 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.384144 | orchestrator | 2026-02-02 00:59:25.384148 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2026-02-02 00:59:25.384153 | orchestrator | 2026-02-02 00:59:25.384157 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 00:59:25.384161 | orchestrator | Monday 02 February 2026 00:58:08 +0000 (0:00:00.710) 0:10:32.366 ******* 2026-02-02 00:59:25.384168 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.384220 | orchestrator | 2026-02-02 00:59:25.384225 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-02 00:59:25.384229 | orchestrator | Monday 02 February 2026 00:58:08 +0000 (0:00:00.571) 0:10:32.937 ******* 2026-02-02 00:59:25.384233 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.384238 | orchestrator | 2026-02-02 00:59:25.384242 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-02 00:59:25.384246 | orchestrator | Monday 02 February 2026 00:58:09 +0000 (0:00:00.876) 0:10:33.814 ******* 2026-02-02 00:59:25.384250 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.384254 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.384259 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.384263 | orchestrator | 2026-02-02 00:59:25.384267 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-02 00:59:25.384276 | orchestrator | Monday 02 February 2026 00:58:09 +0000 (0:00:00.351) 0:10:34.165 ******* 2026-02-02 00:59:25.384280 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.384284 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.384289 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.384293 | orchestrator | 
2026-02-02 00:59:25.384312 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-02 00:59:25.384316 | orchestrator | Monday 02 February 2026 00:58:10 +0000 (0:00:00.761) 0:10:34.927 ******* 2026-02-02 00:59:25.384320 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.384325 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.384329 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.384333 | orchestrator | 2026-02-02 00:59:25.384337 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-02 00:59:25.384341 | orchestrator | Monday 02 February 2026 00:58:11 +0000 (0:00:00.837) 0:10:35.765 ******* 2026-02-02 00:59:25.384345 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.384350 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.384354 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.384358 | orchestrator | 2026-02-02 00:59:25.384363 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-02 00:59:25.384367 | orchestrator | Monday 02 February 2026 00:58:12 +0000 (0:00:01.027) 0:10:36.792 ******* 2026-02-02 00:59:25.384371 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.384375 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.384379 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.384383 | orchestrator | 2026-02-02 00:59:25.384388 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-02 00:59:25.384392 | orchestrator | Monday 02 February 2026 00:58:12 +0000 (0:00:00.365) 0:10:37.158 ******* 2026-02-02 00:59:25.384396 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.384405 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.384410 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.384414 | orchestrator | 2026-02-02 00:59:25.384418 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-02 00:59:25.384422 | orchestrator | Monday 02 February 2026 00:58:13 +0000 (0:00:00.349) 0:10:37.507 ******* 2026-02-02 00:59:25.384427 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.384431 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.384435 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.384440 | orchestrator | 2026-02-02 00:59:25.384444 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 00:59:25.384448 | orchestrator | Monday 02 February 2026 00:58:13 +0000 (0:00:00.337) 0:10:37.845 ******* 2026-02-02 00:59:25.384453 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.384457 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.384462 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.384466 | orchestrator | 2026-02-02 00:59:25.384470 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 00:59:25.384474 | orchestrator | Monday 02 February 2026 00:58:14 +0000 (0:00:00.990) 0:10:38.835 ******* 2026-02-02 00:59:25.384478 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.384482 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.384486 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.384491 | orchestrator | 2026-02-02 00:59:25.384495 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 00:59:25.384499 | orchestrator | Monday 02 February 2026 00:58:15 +0000 (0:00:00.837) 0:10:39.673 ******* 2026-02-02 00:59:25.384503 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.384507 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.384512 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.384516 | orchestrator | 2026-02-02 00:59:25.384520 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2026-02-02 00:59:25.384524 | orchestrator | Monday 02 February 2026 00:58:15 +0000 (0:00:00.424) 0:10:40.097 ******* 2026-02-02 00:59:25.384533 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.384538 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.384542 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.384547 | orchestrator | 2026-02-02 00:59:25.384551 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 00:59:25.384555 | orchestrator | Monday 02 February 2026 00:58:16 +0000 (0:00:00.432) 0:10:40.530 ******* 2026-02-02 00:59:25.384560 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.384564 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.384568 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.384573 | orchestrator | 2026-02-02 00:59:25.384577 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 00:59:25.384582 | orchestrator | Monday 02 February 2026 00:58:16 +0000 (0:00:00.768) 0:10:41.299 ******* 2026-02-02 00:59:25.384586 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.384590 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.384595 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.384599 | orchestrator | 2026-02-02 00:59:25.384603 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 00:59:25.384608 | orchestrator | Monday 02 February 2026 00:58:17 +0000 (0:00:00.377) 0:10:41.676 ******* 2026-02-02 00:59:25.384612 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.384616 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.384623 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.384628 | orchestrator | 2026-02-02 00:59:25.384633 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2026-02-02 00:59:25.384637 | orchestrator | Monday 02 February 2026 00:58:17 +0000 (0:00:00.414) 0:10:42.091 ******* 2026-02-02 00:59:25.384641 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.384646 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.384650 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.384654 | orchestrator | 2026-02-02 00:59:25.384658 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 00:59:25.384663 | orchestrator | Monday 02 February 2026 00:58:18 +0000 (0:00:00.377) 0:10:42.469 ******* 2026-02-02 00:59:25.384667 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.384672 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.384676 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.384681 | orchestrator | 2026-02-02 00:59:25.384685 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 00:59:25.384690 | orchestrator | Monday 02 February 2026 00:58:19 +0000 (0:00:00.866) 0:10:43.335 ******* 2026-02-02 00:59:25.384694 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.384698 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.384702 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.384706 | orchestrator | 2026-02-02 00:59:25.384711 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 00:59:25.384715 | orchestrator | Monday 02 February 2026 00:58:19 +0000 (0:00:00.402) 0:10:43.738 ******* 2026-02-02 00:59:25.384719 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.384723 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.384727 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.384731 | orchestrator | 2026-02-02 00:59:25.384736 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2026-02-02 00:59:25.384740 | orchestrator | Monday 02 February 2026 00:58:19 +0000 (0:00:00.379) 0:10:44.118 ******* 2026-02-02 00:59:25.384744 | orchestrator | ok: [testbed-node-3] 2026-02-02 00:59:25.384748 | orchestrator | ok: [testbed-node-4] 2026-02-02 00:59:25.384770 | orchestrator | ok: [testbed-node-5] 2026-02-02 00:59:25.384775 | orchestrator | 2026-02-02 00:59:25.384779 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-02 00:59:25.384783 | orchestrator | Monday 02 February 2026 00:58:20 +0000 (0:00:00.590) 0:10:44.709 ******* 2026-02-02 00:59:25.384788 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.384796 | orchestrator | 2026-02-02 00:59:25.384800 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-02 00:59:25.384804 | orchestrator | Monday 02 February 2026 00:58:21 +0000 (0:00:00.910) 0:10:45.620 ******* 2026-02-02 00:59:25.384808 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 00:59:25.384816 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 00:59:25.384820 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 00:59:25.384845 | orchestrator | 2026-02-02 00:59:25.384849 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-02 00:59:25.384854 | orchestrator | Monday 02 February 2026 00:58:23 +0000 (0:00:02.326) 0:10:47.946 ******* 2026-02-02 00:59:25.384858 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-02 00:59:25.384863 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 00:59:25.384867 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.384871 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-02 00:59:25.384875 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-02 00:59:25.384879 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.384884 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-02 00:59:25.384888 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-02 00:59:25.384892 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.384896 | orchestrator | 2026-02-02 00:59:25.384900 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-02 00:59:25.384904 | orchestrator | Monday 02 February 2026 00:58:24 +0000 (0:00:01.110) 0:10:49.057 ******* 2026-02-02 00:59:25.384909 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.384913 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.384917 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.384921 | orchestrator | 2026-02-02 00:59:25.384925 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-02 00:59:25.384930 | orchestrator | Monday 02 February 2026 00:58:25 +0000 (0:00:00.724) 0:10:49.782 ******* 2026-02-02 00:59:25.384934 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.384938 | orchestrator | 2026-02-02 00:59:25.384942 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-02 00:59:25.384947 | orchestrator | Monday 02 February 2026 00:58:26 +0000 (0:00:00.610) 0:10:50.392 ******* 2026-02-02 00:59:25.384951 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 00:59:25.384956 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2026-02-02 00:59:25.384961 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-02 00:59:25.384965 | orchestrator | 2026-02-02 00:59:25.384969 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-02 00:59:25.384973 | orchestrator | Monday 02 February 2026 00:58:26 +0000 (0:00:00.832) 0:10:51.224 ******* 2026-02-02 00:59:25.384977 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 00:59:25.384985 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-02 00:59:25.384989 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 00:59:25.384993 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-02 00:59:25.384998 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 00:59:25.385008 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-02 00:59:25.385012 | orchestrator | 2026-02-02 00:59:25.385016 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-02 00:59:25.385020 | orchestrator | Monday 02 February 2026 00:58:31 +0000 (0:00:05.054) 0:10:56.279 ******* 2026-02-02 00:59:25.385025 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 00:59:25.385029 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 00:59:25.385033 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2026-02-02 00:59:25.385037 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 00:59:25.385041 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 00:59:25.385045 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 00:59:25.385049 | orchestrator | 2026-02-02 00:59:25.385054 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-02 00:59:25.385058 | orchestrator | Monday 02 February 2026 00:58:34 +0000 (0:00:02.321) 0:10:58.601 ******* 2026-02-02 00:59:25.385062 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-02 00:59:25.385066 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.385083 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-02 00:59:25.385088 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.385092 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-02 00:59:25.385096 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.385100 | orchestrator | 2026-02-02 00:59:25.385105 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-02 00:59:25.385109 | orchestrator | Monday 02 February 2026 00:58:35 +0000 (0:00:01.226) 0:10:59.828 ******* 2026-02-02 00:59:25.385113 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-02 00:59:25.385117 | orchestrator | 2026-02-02 00:59:25.385125 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-02 00:59:25.385129 | orchestrator | Monday 02 February 2026 00:58:35 +0000 (0:00:00.241) 0:11:00.069 ******* 2026-02-02 00:59:25.385133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 00:59:25.385138 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 00:59:25.385143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 00:59:25.385147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 00:59:25.385151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 00:59:25.385155 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.385160 | orchestrator | 2026-02-02 00:59:25.385164 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-02 00:59:25.385168 | orchestrator | Monday 02 February 2026 00:58:36 +0000 (0:00:00.934) 0:11:01.003 ******* 2026-02-02 00:59:25.385173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 00:59:25.385177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 00:59:25.385181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 00:59:25.385186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 00:59:25.385195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 00:59:25.385200 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.385204 | orchestrator | 2026-02-02 00:59:25.385208 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-02 00:59:25.385212 | orchestrator | Monday 02 February 2026 00:58:37 +0000 (0:00:01.114) 0:11:02.118 ******* 2026-02-02 00:59:25.385217 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 00:59:25.385224 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 00:59:25.385228 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 00:59:25.385233 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 00:59:25.385237 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 00:59:25.385241 | orchestrator | 2026-02-02 00:59:25.385246 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-02 00:59:25.385250 | orchestrator | Monday 02 February 2026 00:59:09 +0000 (0:00:31.330) 0:11:33.448 ******* 2026-02-02 00:59:25.385254 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.385258 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.385263 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.385267 | orchestrator | 2026-02-02 00:59:25.385271 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-02 00:59:25.385275 | orchestrator | Monday 02 February 2026 00:59:09 +0000 (0:00:00.663) 0:11:34.111 
******* 2026-02-02 00:59:25.385280 | orchestrator | skipping: [testbed-node-3] 2026-02-02 00:59:25.385284 | orchestrator | skipping: [testbed-node-4] 2026-02-02 00:59:25.385288 | orchestrator | skipping: [testbed-node-5] 2026-02-02 00:59:25.385292 | orchestrator | 2026-02-02 00:59:25.385296 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-02 00:59:25.385301 | orchestrator | Monday 02 February 2026 00:59:10 +0000 (0:00:00.343) 0:11:34.455 ******* 2026-02-02 00:59:25.385305 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.385309 | orchestrator | 2026-02-02 00:59:25.385314 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-02-02 00:59:25.385318 | orchestrator | Monday 02 February 2026 00:59:10 +0000 (0:00:00.584) 0:11:35.040 ******* 2026-02-02 00:59:25.385339 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 00:59:25.385344 | orchestrator | 2026-02-02 00:59:25.385348 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-02 00:59:25.385353 | orchestrator | Monday 02 February 2026 00:59:11 +0000 (0:00:00.878) 0:11:35.918 ******* 2026-02-02 00:59:25.385357 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.385365 | orchestrator | changed: [testbed-node-4] 2026-02-02 00:59:25.385369 | orchestrator | changed: [testbed-node-5] 2026-02-02 00:59:25.385373 | orchestrator | 2026-02-02 00:59:25.385377 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-02 00:59:25.385382 | orchestrator | Monday 02 February 2026 00:59:12 +0000 (0:00:01.326) 0:11:37.244 ******* 2026-02-02 00:59:25.385517 | orchestrator | changed: [testbed-node-3] 2026-02-02 00:59:25.385524 | orchestrator | 
changed: [testbed-node-4]
2026-02-02 00:59:25.385534 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:59:25.385538 | orchestrator |
2026-02-02 00:59:25.385543 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-02 00:59:25.385548 | orchestrator | Monday 02 February 2026 00:59:14 +0000 (0:00:01.373) 0:11:38.618 *******
2026-02-02 00:59:25.385552 | orchestrator | changed: [testbed-node-3]
2026-02-02 00:59:25.385556 | orchestrator | changed: [testbed-node-5]
2026-02-02 00:59:25.385561 | orchestrator | changed: [testbed-node-4]
2026-02-02 00:59:25.385565 | orchestrator |
2026-02-02 00:59:25.385569 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-02 00:59:25.385573 | orchestrator | Monday 02 February 2026 00:59:16 +0000 (0:00:02.604) 0:11:41.222 *******
2026-02-02 00:59:25.385578 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-02 00:59:25.385582 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-02 00:59:25.385587 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-02 00:59:25.385591 | orchestrator |
2026-02-02 00:59:25.385603 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-02 00:59:25.385608 | orchestrator | Monday 02 February 2026 00:59:19 +0000 (0:00:02.379) 0:11:43.601 *******
2026-02-02 00:59:25.385612 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.385617 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.385621 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.385625 | orchestrator |
2026-02-02 00:59:25.385629 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-02 00:59:25.385634 | orchestrator | Monday 02 February 2026 00:59:19 +0000 (0:00:00.678) 0:11:44.279 *******
2026-02-02 00:59:25.385638 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 00:59:25.385642 | orchestrator |
2026-02-02 00:59:25.385647 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-02 00:59:25.385651 | orchestrator | Monday 02 February 2026 00:59:20 +0000 (0:00:00.640) 0:11:44.920 *******
2026-02-02 00:59:25.385655 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.385660 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.385664 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.385668 | orchestrator |
2026-02-02 00:59:25.385673 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-02 00:59:25.385677 | orchestrator | Monday 02 February 2026 00:59:20 +0000 (0:00:00.329) 0:11:45.250 *******
2026-02-02 00:59:25.385685 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.385689 | orchestrator | skipping: [testbed-node-4]
2026-02-02 00:59:25.385694 | orchestrator | skipping: [testbed-node-5]
2026-02-02 00:59:25.385698 | orchestrator |
2026-02-02 00:59:25.385703 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-02 00:59:25.385708 | orchestrator | Monday 02 February 2026 00:59:21 +0000 (0:00:00.700) 0:11:45.909 *******
2026-02-02 00:59:25.385712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 00:59:25.385717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 00:59:25.385721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 00:59:25.385725 | orchestrator | skipping: [testbed-node-3]
2026-02-02 00:59:25.385730 | orchestrator |
2026-02-02 00:59:25.385735 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-02 00:59:25.385740 | orchestrator | Monday 02 February 2026 00:59:22 +0000 (0:00:00.700) 0:11:46.609 *******
2026-02-02 00:59:25.385744 | orchestrator | ok: [testbed-node-3]
2026-02-02 00:59:25.385759 | orchestrator | ok: [testbed-node-4]
2026-02-02 00:59:25.385764 | orchestrator | ok: [testbed-node-5]
2026-02-02 00:59:25.385768 | orchestrator |
2026-02-02 00:59:25.385773 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 00:59:25.385782 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0
2026-02-02 00:59:25.385787 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-02-02 00:59:25.385792 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-02-02 00:59:25.385797 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0
2026-02-02 00:59:25.385801 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-02-02 00:59:25.385806 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-02-02 00:59:25.385810 | orchestrator |
2026-02-02 00:59:25.385815 | orchestrator |
2026-02-02 00:59:25.385819 | orchestrator |
2026-02-02 00:59:25.385824 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 00:59:25.385832 | orchestrator | Monday 02 February 2026 00:59:22 +0000 (0:00:00.268) 0:11:46.878 *******
2026-02-02 00:59:25.385836 | orchestrator | ===============================================================================
2026-02-02 00:59:25.385841 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 64.47s
2026-02-02 00:59:25.385846 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 36.04s
2026-02-02 00:59:25.385850 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.33s
2026-02-02 00:59:25.385854 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.23s
2026-02-02 00:59:25.385859 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.94s
2026-02-02 00:59:25.385864 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.01s
2026-02-02 00:59:25.385868 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.56s
2026-02-02 00:59:25.385872 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.36s
2026-02-02 00:59:25.385877 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.92s
2026-02-02 00:59:25.385881 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.82s
2026-02-02 00:59:25.385886 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 7.16s
2026-02-02 00:59:25.385891 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.11s
2026-02-02 00:59:25.385907 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.10s
2026-02-02 00:59:25.385912 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.05s
2026-02-02 00:59:25.385917 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 4.67s
2026-02-02 00:59:25.385921 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.52s
2026-02-02 00:59:25.385926 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.14s
2026-02-02 00:59:25.385931 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.10s
2026-02-02 00:59:25.385936 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.64s
2026-02-02 00:59:25.385941 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.61s
2026-02-02 00:59:25.385945 | orchestrator | 2026-02-02 00:59:25 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:59:28.410414 | orchestrator | 2026-02-02 00:59:28 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED
2026-02-02 00:59:28.411984 | orchestrator | 2026-02-02 00:59:28 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED
2026-02-02 00:59:28.413408 | orchestrator | 2026-02-02 00:59:28 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED
2026-02-02 00:59:28.413485 | orchestrator | 2026-02-02 00:59:28 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:59:31.452941 | orchestrator | 2026-02-02 00:59:31 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED
2026-02-02 00:59:31.454739 | orchestrator | 2026-02-02 00:59:31 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED
2026-02-02 00:59:31.456233 | orchestrator | 2026-02-02 00:59:31 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED
2026-02-02 00:59:31.456268 | orchestrator | 2026-02-02 00:59:31 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:59:34.491322 | orchestrator | 2026-02-02 00:59:34 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED
2026-02-02 00:59:34.492163 | orchestrator | 2026-02-02 00:59:34 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED
2026-02-02 00:59:34.494589 | orchestrator | 2026-02-02 00:59:34 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED
2026-02-02 00:59:34.494955 | orchestrator | 2026-02-02 00:59:34 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:59:37.539320 | orchestrator | 2026-02-02 00:59:37 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED
2026-02-02 00:59:37.543589 | orchestrator | 2026-02-02 00:59:37 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED
2026-02-02 00:59:37.545659 | orchestrator | 2026-02-02 00:59:37 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED
2026-02-02 00:59:37.546149 | orchestrator | 2026-02-02 00:59:37 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:59:40.594413 | orchestrator | 2026-02-02 00:59:40 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED
2026-02-02 00:59:40.597699 | orchestrator | 2026-02-02 00:59:40 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED
2026-02-02 00:59:40.600164 | orchestrator | 2026-02-02 00:59:40 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED
2026-02-02 00:59:40.600228 | orchestrator | 2026-02-02 00:59:40 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:59:43.658812 | orchestrator | 2026-02-02 00:59:43 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED
2026-02-02 00:59:43.662560 | orchestrator | 2026-02-02 00:59:43 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED
2026-02-02 00:59:43.662642 | orchestrator | 2026-02-02 00:59:43 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED
2026-02-02 00:59:43.662658 | orchestrator | 2026-02-02 00:59:43 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:59:46.704324 | orchestrator | 2026-02-02 00:59:46 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED
2026-02-02 00:59:46.707762 | orchestrator | 2026-02-02 00:59:46 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED
2026-02-02 00:59:46.714930 | orchestrator | 2026-02-02 00:59:46 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED
2026-02-02 00:59:46.715023 | orchestrator | 2026-02-02 00:59:46 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:59:49.758471 | orchestrator | 2026-02-02 00:59:49 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED
2026-02-02 00:59:49.760265 | orchestrator | 2026-02-02 00:59:49 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED
2026-02-02 00:59:49.762435 | orchestrator | 2026-02-02 00:59:49 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED
2026-02-02 00:59:49.762484 | orchestrator | 2026-02-02 00:59:49 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:59:52.798319 | orchestrator | 2026-02-02 00:59:52 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED
2026-02-02 00:59:52.799681 | orchestrator | 2026-02-02 00:59:52 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED
2026-02-02 00:59:52.801636 | orchestrator | 2026-02-02 00:59:52 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED
2026-02-02 00:59:52.801667 | orchestrator | 2026-02-02 00:59:52 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:59:55.860700 | orchestrator | 2026-02-02 00:59:55 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state STARTED
2026-02-02 00:59:55.862643 | orchestrator | 2026-02-02 00:59:55 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED
2026-02-02 00:59:55.864405 | orchestrator | 2026-02-02 00:59:55 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED
2026-02-02 00:59:55.864505 | orchestrator | 2026-02-02 00:59:55 | INFO  | Wait 1 second(s) until the next check
2026-02-02 00:59:58.912783 | orchestrator |
2026-02-02 00:59:58.912932 | orchestrator |
2026-02-02 00:59:58.913024 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 00:59:58.913067 | orchestrator |
2026-02-02 00:59:58.913079 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 00:59:58.913091 | orchestrator | Monday 02 February 2026 00:57:29 +0000 (0:00:00.368) 0:00:00.368 *******
2026-02-02 00:59:58.913102 | orchestrator | ok: [testbed-node-0]
2026-02-02 00:59:58.913282 | orchestrator | ok: [testbed-node-1]
2026-02-02 00:59:58.913295 | orchestrator | ok: [testbed-node-2]
2026-02-02 00:59:58.913306 | orchestrator |
2026-02-02 00:59:58.913317 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 00:59:58.913329 | orchestrator | Monday 02 February 2026 00:57:29 +0000 (0:00:00.322) 0:00:00.691 *******
2026-02-02 00:59:58.913341 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-02-02 00:59:58.913354 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-02-02 00:59:58.913365 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-02-02 00:59:58.913376 | orchestrator |
2026-02-02 00:59:58.913388 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-02-02 00:59:58.913399 | orchestrator |
2026-02-02 00:59:58.913410 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-02 00:59:58.913421 | orchestrator | Monday 02 February 2026 00:57:30 +0000 (0:00:00.488) 0:00:01.180 *******
2026-02-02 00:59:58.913433 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:59:58.913444 | orchestrator |
2026-02-02 00:59:58.913455 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-02-02 00:59:58.913466 | orchestrator | Monday 02 February 2026 00:57:30 +0000 (0:00:00.524) 0:00:01.705 *******
2026-02-02 00:59:58.913477 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 00:59:58.913488 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 00:59:58.913499 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 00:59:58.913510 | orchestrator |
2026-02-02 00:59:58.913522 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-02-02 00:59:58.913533 | orchestrator | Monday 02 February 2026 00:57:31 +0000 (0:00:00.741) 0:00:02.446 *******
2026-02-02 00:59:58.913547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:59:58.913678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:59:58.913728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:59:58.913746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-02 00:59:58.913762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-02 00:59:58.913785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-02 00:59:58.913798 | orchestrator |
2026-02-02 00:59:58.913810 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-02 00:59:58.913821 | orchestrator | Monday 02 February 2026 00:57:33 +0000 (0:00:01.851) 0:00:04.298 *******
2026-02-02 00:59:58.913837 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 00:59:58.913849 | orchestrator |
2026-02-02 00:59:58.913860 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-02-02 00:59:58.913880 | orchestrator | Monday 02 February 2026 00:57:33 +0000 (0:00:00.534) 0:00:04.832 *******
2026-02-02 00:59:58.913893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:59:58.913905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:59:58.913924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:59:58.913937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-02 00:59:58.913964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-02 00:59:58.913978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-02 00:59:58.913997 | orchestrator |
2026-02-02 00:59:58.914008 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-02-02 00:59:58.914102 | orchestrator | Monday 02 February 2026 00:57:36 +0000 (0:00:02.622) 0:00:07.454 *******
2026-02-02 00:59:58.914115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:59:58.914141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-02 00:59:58.914154 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:58.914167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:59:58.914187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:59:58.914200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-02 00:59:58.914212 | orchestrator | skipping: [testbed-node-2]
2026-02-02 00:59:58.914237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-02 00:59:58.914250 | orchestrator | skipping: [testbed-node-1]
2026-02-02 00:59:58.914262 | orchestrator |
2026-02-02 00:59:58.914273 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-02-02 00:59:58.914285 | orchestrator | Monday 02 February 2026 00:57:37 +0000 (0:00:01.422) 0:00:08.877 *******
2026-02-02 00:59:58.914296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 00:59:58.914317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-02 00:59:58.914331 | orchestrator | skipping: [testbed-node-0]
2026-02-02 00:59:58.914345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'],
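The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a client polling remote task states once per second until they leave the STARTED state. A minimal sketch of that polling pattern (function and parameter names are hypothetical, not the actual OSISM client):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=3600):
    """Poll task states until none is STARTED anymore.

    get_state: callable returning the current state string for a task id
    (hypothetical stand-in for querying the task queue).
    """
    for _ in range(max_checks):
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        # done once no task is STARTED anymore
        if all(state != "STARTED" for state in states.values()):
            return states
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError("tasks did not finish within the polling budget")
```

In the log three task UUIDs are polled in each iteration; the loop above would print the same three status lines followed by the wait notice until all of them report a terminal state.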
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:59:58.914372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 00:59:58.914389 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:58.914417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:59:58.914431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 00:59:58.914446 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:58.914459 | orchestrator | 2026-02-02 00:59:58.914471 | orchestrator | TASK [opensearch : Copying over config.json files for services] 
**************** 2026-02-02 00:59:58.914485 | orchestrator | Monday 02 February 2026 00:57:39 +0000 (0:00:01.317) 0:00:10.194 ******* 2026-02-02 00:59:58.914498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:59:58.914525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:59:58.914547 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:59:58.914562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 00:59:58.914577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 00:59:58.914602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 2026-02-02 00:59:58 | INFO  | Task e8d9e7ec-2660-4496-bd59-1f79003f338b is in state SUCCESS 2026-02-02 00:59:58.914618 | orchestrator | '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 00:59:58.914639 | orchestrator | 2026-02-02 00:59:58.914652 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-02 00:59:58.914664 | orchestrator | Monday 02 February 2026 00:57:41 +0000 (0:00:02.454) 0:00:12.648 ******* 2026-02-02 00:59:58.914675 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:58.914686 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:58.914697 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:58.914708 | orchestrator | 2026-02-02 00:59:58.914719 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-02 00:59:58.914730 | orchestrator | Monday 02 February 2026 00:57:44 +0000 (0:00:02.801) 0:00:15.450 ******* 2026-02-02 00:59:58.914741 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:58.914753 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:58.914763 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:58.914775 | orchestrator | 2026-02-02 00:59:58.914786 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-02-02 00:59:58.914797 | orchestrator | Monday 02 February 2026 00:57:47 +0000 (0:00:02.815) 0:00:18.265 ******* 2026-02-02 00:59:58.914809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:59:58.914821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:59:58.914844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 00:59:58.914863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 00:59:58.914877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 00:59:58.914890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 00:59:58.914902 | orchestrator | 2026-02-02 00:59:58.914921 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-02-02 00:59:58.914932 | orchestrator | Monday 02 February 2026 00:57:49 +0000 (0:00:02.213) 0:00:20.479 ******* 2026-02-02 00:59:58.914943 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 00:59:58.914955 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:59:58.914966 | orchestrator | } 2026-02-02 00:59:58.914977 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 00:59:58.914994 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:59:58.915005 | orchestrator | } 2026-02-02 00:59:58.915016 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 00:59:58.915090 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 00:59:58.915121 | orchestrator | } 2026-02-02 00:59:58.915141 | orchestrator | 2026-02-02 00:59:58.915153 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 00:59:58.915164 | orchestrator | Monday 02 February 2026 00:57:49 +0000 (0:00:00.417) 0:00:20.896 ******* 2026-02-02 00:59:58.915176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:59:58.915189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 00:59:58.915201 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:58.915213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:59:58.915248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 00:59:58.915261 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:58.915273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 00:59:58.915285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 00:59:58.915298 | orchestrator | skipping: [testbed-node-2] 
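The loop items dumped above each carry a kolla-style 'healthcheck' dict (durations in seconds as strings, a CMD-SHELL test, a retry count). As an illustrative sketch only, and not kolla-ansible's actual code, this is roughly how such a dict maps onto Docker-API healthcheck parameters, which express durations in nanoseconds:

```python
# Illustrative sketch (assumed mapping, not kolla-ansible's implementation):
# convert a kolla-style healthcheck dict, as seen in the log items above,
# into the Docker-API HealthConfig shape (nanosecond integer durations).
SECOND_NS = 1_000_000_000

def to_docker_healthcheck(hc: dict) -> dict:
    """Map second-valued string fields to nanosecond integers."""
    return {
        "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl http://...']
        "Interval": int(hc["interval"]) * SECOND_NS,
        "Timeout": int(hc["timeout"]) * SECOND_NS,
        "Retries": int(hc["retries"]),
        "StartPeriod": int(hc["start_period"]) * SECOND_NS,
    }

# One of the healthcheck dicts from the log above:
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.12:9200"],
      "timeout": "30"}
converted = to_docker_healthcheck(hc)
print(converted["Retries"])  # → 3
```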
2026-02-02 00:59:58.915309 | orchestrator | 2026-02-02 00:59:58.915320 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-02 00:59:58.915331 | orchestrator | Monday 02 February 2026 00:57:51 +0000 (0:00:01.608) 0:00:22.505 ******* 2026-02-02 00:59:58.915342 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:58.915353 | orchestrator | skipping: [testbed-node-1] 2026-02-02 00:59:58.915364 | orchestrator | skipping: [testbed-node-2] 2026-02-02 00:59:58.915375 | orchestrator | 2026-02-02 00:59:58.915386 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-02 00:59:58.915397 | orchestrator | Monday 02 February 2026 00:57:51 +0000 (0:00:00.350) 0:00:22.855 ******* 2026-02-02 00:59:58.915415 | orchestrator | 2026-02-02 00:59:58.915426 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-02 00:59:58.915437 | orchestrator | Monday 02 February 2026 00:57:51 +0000 (0:00:00.068) 0:00:22.923 ******* 2026-02-02 00:59:58.915448 | orchestrator | 2026-02-02 00:59:58.915458 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-02 00:59:58.915470 | orchestrator | Monday 02 February 2026 00:57:51 +0000 (0:00:00.069) 0:00:22.992 ******* 2026-02-02 00:59:58.915480 | orchestrator | 2026-02-02 00:59:58.915491 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-02 00:59:58.915502 | orchestrator | Monday 02 February 2026 00:57:52 +0000 (0:00:00.069) 0:00:23.061 ******* 2026-02-02 00:59:58.915513 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:58.915524 | orchestrator | 2026-02-02 00:59:58.915535 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-02 00:59:58.915546 | orchestrator | Monday 02 February 2026 00:57:52 +0000 (0:00:00.217) 0:00:23.279 ******* 
2026-02-02 00:59:58.915557 | orchestrator | skipping: [testbed-node-0] 2026-02-02 00:59:58.915568 | orchestrator | 2026-02-02 00:59:58.915579 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-02 00:59:58.915590 | orchestrator | Monday 02 February 2026 00:57:52 +0000 (0:00:00.215) 0:00:23.494 ******* 2026-02-02 00:59:58.915601 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:58.915612 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:58.915623 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:58.915633 | orchestrator | 2026-02-02 00:59:58.915644 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-02 00:59:58.915655 | orchestrator | Monday 02 February 2026 00:58:46 +0000 (0:00:54.120) 0:01:17.615 ******* 2026-02-02 00:59:58.915666 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:58.915682 | orchestrator | changed: [testbed-node-1] 2026-02-02 00:59:58.915693 | orchestrator | changed: [testbed-node-2] 2026-02-02 00:59:58.915704 | orchestrator | 2026-02-02 00:59:58.915722 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-02 00:59:58.915875 | orchestrator | Monday 02 February 2026 00:59:43 +0000 (0:00:57.385) 0:02:15.000 ******* 2026-02-02 00:59:58.915889 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 00:59:58.915901 | orchestrator | 2026-02-02 00:59:58.915912 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-02 00:59:58.915929 | orchestrator | Monday 02 February 2026 00:59:44 +0000 (0:00:00.598) 0:02:15.599 ******* 2026-02-02 00:59:58.915944 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:58.915956 | orchestrator | 2026-02-02 00:59:58.915967 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become 
healthy] ************** 2026-02-02 00:59:58.915978 | orchestrator | Monday 02 February 2026 00:59:47 +0000 (0:00:02.548) 0:02:18.148 ******* 2026-02-02 00:59:58.915989 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:58.916000 | orchestrator | 2026-02-02 00:59:58.916011 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-02 00:59:58.916022 | orchestrator | Monday 02 February 2026 00:59:49 +0000 (0:00:02.187) 0:02:20.336 ******* 2026-02-02 00:59:58.916105 | orchestrator | ok: [testbed-node-0] 2026-02-02 00:59:58.916118 | orchestrator | 2026-02-02 00:59:58.916129 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-02 00:59:58.916254 | orchestrator | Monday 02 February 2026 00:59:51 +0000 (0:00:02.694) 0:02:23.031 ******* 2026-02-02 00:59:58.916268 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:58.916280 | orchestrator | 2026-02-02 00:59:58.916291 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-02 00:59:58.916302 | orchestrator | Monday 02 February 2026 00:59:54 +0000 (0:00:02.983) 0:02:26.014 ******* 2026-02-02 00:59:58.916313 | orchestrator | changed: [testbed-node-0] 2026-02-02 00:59:58.916324 | orchestrator | 2026-02-02 00:59:58.916346 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 00:59:58.916358 | orchestrator | testbed-node-0 : ok=20  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 00:59:58.916371 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-02 00:59:58.916382 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-02 00:59:58.916393 | orchestrator | 2026-02-02 00:59:58.916404 | orchestrator | 2026-02-02 00:59:58.916416 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-02 00:59:58.916427 | orchestrator | Monday 02 February 2026 00:59:57 +0000 (0:00:02.473) 0:02:28.487 ******* 2026-02-02 00:59:58.916438 | orchestrator | =============================================================================== 2026-02-02 00:59:58.916449 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 57.39s 2026-02-02 00:59:58.916460 | orchestrator | opensearch : Restart opensearch container ------------------------------ 54.12s 2026-02-02 00:59:58.916476 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.98s 2026-02-02 00:59:58.916492 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.82s 2026-02-02 00:59:58.916503 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.80s 2026-02-02 00:59:58.916514 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.69s 2026-02-02 00:59:58.916525 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.62s 2026-02-02 00:59:58.916536 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.55s 2026-02-02 00:59:58.916546 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.47s 2026-02-02 00:59:58.916557 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.45s 2026-02-02 00:59:58.916568 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.21s 2026-02-02 00:59:58.916579 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.19s 2026-02-02 00:59:58.916590 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.85s 2026-02-02 00:59:58.916602 | orchestrator | service-check-containers : 
Include tasks -------------------------------- 1.61s 2026-02-02 00:59:58.916613 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.42s 2026-02-02 00:59:58.916623 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.32s 2026-02-02 00:59:58.916632 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.74s 2026-02-02 00:59:58.916642 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.60s 2026-02-02 00:59:58.916652 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-02-02 00:59:58.916662 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-02-02 00:59:58.916672 | orchestrator | 2026-02-02 00:59:58 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state STARTED 2026-02-02 00:59:58.916688 | orchestrator | 2026-02-02 00:59:58 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED 2026-02-02 00:59:58.916699 | orchestrator | 2026-02-02 00:59:58 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:00:53.789617 | orchestrator | 2026-02-02 01:00:53.790813 | orchestrator | 2026-02-02 01:00:53.790857 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-02-02 
01:00:53.790877 | orchestrator | 2026-02-02 01:00:53.790895 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-02-02 01:00:53.790915 | orchestrator | Monday 02 February 2026 00:57:29 +0000 (0:00:00.102) 0:00:00.102 ******* 2026-02-02 01:00:53.790933 | orchestrator | ok: [localhost] => { 2026-02-02 01:00:53.790952 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-02-02 01:00:53.790995 | orchestrator | } 2026-02-02 01:00:53.791014 | orchestrator | 2026-02-02 01:00:53.791062 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-02-02 01:00:53.791080 | orchestrator | Monday 02 February 2026 00:57:29 +0000 (0:00:00.042) 0:00:00.145 ******* 2026-02-02 01:00:53.791097 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-02-02 01:00:53.791116 | orchestrator | ...ignoring 2026-02-02 01:00:53.791134 | orchestrator | 2026-02-02 01:00:53.791153 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-02-02 01:00:53.791171 | orchestrator | Monday 02 February 2026 00:57:32 +0000 (0:00:03.001) 0:00:03.146 ******* 2026-02-02 01:00:53.791228 | orchestrator | skipping: [localhost] 2026-02-02 01:00:53.791249 | orchestrator | 2026-02-02 01:00:53.791267 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-02-02 01:00:53.791286 | orchestrator | Monday 02 February 2026 00:57:32 +0000 (0:00:00.062) 0:00:03.209 ******* 2026-02-02 01:00:53.791305 | orchestrator | ok: [localhost] 2026-02-02 01:00:53.791324 | orchestrator | 2026-02-02 01:00:53.791344 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 01:00:53.791357 | orchestrator | 2026-02-02 
01:00:53.791370 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 01:00:53.791383 | orchestrator | Monday 02 February 2026 00:57:32 +0000 (0:00:00.165) 0:00:03.374 ******* 2026-02-02 01:00:53.791397 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:00:53.791411 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:00:53.791425 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:00:53.791438 | orchestrator | 2026-02-02 01:00:53.791451 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 01:00:53.791464 | orchestrator | Monday 02 February 2026 00:57:32 +0000 (0:00:00.348) 0:00:03.723 ******* 2026-02-02 01:00:53.791477 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-02 01:00:53.791491 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-02 01:00:53.791505 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-02 01:00:53.791518 | orchestrator | 2026-02-02 01:00:53.791531 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-02 01:00:53.791545 | orchestrator | 2026-02-02 01:00:53.791558 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-02 01:00:53.791572 | orchestrator | Monday 02 February 2026 00:57:33 +0000 (0:00:00.620) 0:00:04.343 ******* 2026-02-02 01:00:53.791584 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 01:00:53.791597 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-02 01:00:53.791611 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-02 01:00:53.791623 | orchestrator | 2026-02-02 01:00:53.791636 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-02 01:00:53.791650 | orchestrator | Monday 02 February 2026 00:57:33 +0000 (0:00:00.392) 0:00:04.736 
******* 2026-02-02 01:00:53.791679 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:00:53.791691 | orchestrator | 2026-02-02 01:00:53.791702 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-02 01:00:53.791713 | orchestrator | Monday 02 February 2026 00:57:34 +0000 (0:00:00.648) 0:00:05.384 ******* 2026-02-02 01:00:53.791809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 01:00:53.791845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 01:00:53.791867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}}}}) 2026-02-02 01:00:53.791888 | orchestrator | 2026-02-02 01:00:53.791932 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-02 01:00:53.791945 | orchestrator | Monday 02 February 2026 00:57:37 +0000 (0:00:03.098) 0:00:08.482 ******* 2026-02-02 01:00:53.791957 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:00:53.792035 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.792047 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.792058 | orchestrator | 2026-02-02 01:00:53.792069 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-02 01:00:53.792080 | orchestrator | Monday 02 February 2026 00:57:38 +0000 (0:00:00.733) 0:00:09.215 ******* 2026-02-02 01:00:53.792091 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.792102 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.792113 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:00:53.792124 | orchestrator | 2026-02-02 01:00:53.792135 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-02 01:00:53.792146 | orchestrator | Monday 02 February 2026 00:57:39 +0000 (0:00:01.532) 0:00:10.748 ******* 2026-02-02 01:00:53.792158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 01:00:53.792212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 01:00:53.792237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 01:00:53.792250 | orchestrator | 2026-02-02 01:00:53.792262 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-02 01:00:53.792273 | orchestrator | Monday 02 February 2026 00:57:43 +0000 (0:00:03.559) 0:00:14.308 ******* 2026-02-02 01:00:53.792284 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.792295 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.792306 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:00:53.792317 | orchestrator | 2026-02-02 01:00:53.792328 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-02 01:00:53.792339 | orchestrator | Monday 02 February 2026 00:57:44 +0000 (0:00:01.108) 0:00:15.416 ******* 2026-02-02 01:00:53.792350 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:00:53.792366 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:00:53.792377 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:00:53.792388 | orchestrator | 2026-02-02 01:00:53.792399 | orchestrator | TASK [mariadb : include_tasks] 
************************************************* 2026-02-02 01:00:53.792410 | orchestrator | Monday 02 February 2026 00:57:49 +0000 (0:00:04.993) 0:00:20.410 ******* 2026-02-02 01:00:53.792421 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:00:53.792433 | orchestrator | 2026-02-02 01:00:53.792444 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-02 01:00:53.792460 | orchestrator | Monday 02 February 2026 00:57:50 +0000 (0:00:00.778) 0:00:21.189 ******* 2026-02-02 01:00:53.792500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.792514 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:00:53.792530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.792542 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.792559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.792577 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.792588 | orchestrator | 2026-02-02 01:00:53.792598 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-02 01:00:53.792607 | orchestrator | Monday 02 February 2026 00:57:53 +0000 (0:00:02.990) 0:00:24.179 ******* 2026-02-02 01:00:53.792618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.792629 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:00:53.792652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.792670 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.792681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.792691 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.792701 | orchestrator | 2026-02-02 01:00:53.792711 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-02 01:00:53.792720 | orchestrator | Monday 02 February 2026 00:57:57 +0000 (0:00:04.570) 0:00:28.750 ******* 2026-02-02 01:00:53.792735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.792760 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:00:53.792784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.792804 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.792827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.792853 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.792868 | orchestrator | 2026-02-02 01:00:53.792882 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-02-02 01:00:53.792897 | orchestrator | Monday 02 February 2026 00:58:00 +0000 (0:00:02.695) 0:00:31.446 ******* 2026-02-02 01:00:53 | INFO  | Task dcfd60f5-cc4b-44ce-b8b2-2d0af133aec5 is in state SUCCESS 2026-02-02 01:00:53.792924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 01:00:53.792995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 01:00:53.793041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': 
False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 01:00:53.793064 | orchestrator | 2026-02-02 01:00:53.793082 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-02-02 01:00:53.793096 | orchestrator | Monday 02 February 2026 00:58:03 +0000 (0:00:03.229) 0:00:34.675 ******* 2026-02-02 01:00:53.793106 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 01:00:53.793116 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:00:53.793126 | orchestrator | } 2026-02-02 01:00:53.793135 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 01:00:53.793145 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:00:53.793155 | orchestrator | } 2026-02-02 01:00:53.793165 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 01:00:53.793174 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:00:53.793184 | orchestrator | } 2026-02-02 01:00:53.793200 | orchestrator | 2026-02-02 01:00:53.793210 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 01:00:53.793220 | orchestrator | Monday 02 February 2026 00:58:04 +0000 (0:00:00.456) 0:00:35.132 ******* 2026-02-02 01:00:53.793235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.793255 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:00:53.793282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.793300 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.793334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.793352 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.793367 | orchestrator | 2026-02-02 01:00:53.793382 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-02-02 01:00:53.793397 | orchestrator | Monday 02 February 2026 00:58:06 +0000 (0:00:02.478) 0:00:37.610 ******* 2026-02-02 01:00:53.793413 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:00:53.793429 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.793443 | orchestrator | skipping: [testbed-node-2] 2026-02-02 
01:00:53.793458 | orchestrator | 2026-02-02 01:00:53.793474 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-02-02 01:00:53.793490 | orchestrator | Monday 02 February 2026 00:58:06 +0000 (0:00:00.322) 0:00:37.933 ******* 2026-02-02 01:00:53.793509 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:00:53.793524 | orchestrator | 2026-02-02 01:00:53.793540 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-02-02 01:00:53.793556 | orchestrator | Monday 02 February 2026 00:58:07 +0000 (0:00:00.120) 0:00:38.054 ******* 2026-02-02 01:00:53.793566 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:00:53.793575 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.793585 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.793595 | orchestrator | 2026-02-02 01:00:53.793612 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-02-02 01:00:53.793622 | orchestrator | Monday 02 February 2026 00:58:07 +0000 (0:00:00.454) 0:00:38.508 ******* 2026-02-02 01:00:53.793632 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:00:53.793642 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.793652 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.793661 | orchestrator | 2026-02-02 01:00:53.793672 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-02-02 01:00:53.793688 | orchestrator | Monday 02 February 2026 00:58:07 +0000 (0:00:00.301) 0:00:38.810 ******* 2026-02-02 01:00:53.793703 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:00:53.793718 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.793733 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.793749 | orchestrator | 2026-02-02 01:00:53.793764 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] 
******************************
2026-02-02 01:00:53.793793 | orchestrator | Monday 02 February 2026 00:58:08 +0000 (0:00:00.371) 0:00:39.181 *******
2026-02-02 01:00:53.793811 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.793828 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.793845 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.793858 | orchestrator |
2026-02-02 01:00:53.793867 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-02-02 01:00:53.793877 | orchestrator | Monday 02 February 2026 00:58:08 +0000 (0:00:00.476) 0:00:39.657 *******
2026-02-02 01:00:53.793887 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.793896 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.793906 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.793915 | orchestrator |
2026-02-02 01:00:53.793925 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-02-02 01:00:53.793934 | orchestrator | Monday 02 February 2026 00:58:09 +0000 (0:00:00.820) 0:00:40.477 *******
2026-02-02 01:00:53.793944 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.793954 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.793985 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.793995 | orchestrator |
2026-02-02 01:00:53.794005 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-02-02 01:00:53.794050 | orchestrator | Monday 02 February 2026 00:58:09 +0000 (0:00:00.401) 0:00:40.879 *******
2026-02-02 01:00:53.794063 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 01:00:53.794073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-02 01:00:53.794082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-02 01:00:53.794092 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.794102 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-02 01:00:53.794112 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 01:00:53.794121 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-02 01:00:53.794131 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.794141 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-02 01:00:53.794151 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-02 01:00:53.794160 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 01:00:53.794170 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.794180 | orchestrator |
2026-02-02 01:00:53.794190 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-02-02 01:00:53.794199 | orchestrator | Monday 02 February 2026 00:58:10 +0000 (0:00:00.413) 0:00:41.293 *******
2026-02-02 01:00:53.794209 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.794219 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.794229 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.794238 | orchestrator |
2026-02-02 01:00:53.794254 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-02-02 01:00:53.794265 | orchestrator | Monday 02 February 2026 00:58:10 +0000 (0:00:00.401) 0:00:41.694 *******
2026-02-02 01:00:53.794274 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.794284 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.794294 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.794303 | orchestrator |
2026-02-02 01:00:53.794313 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-02-02 01:00:53.794323 | orchestrator | Monday 02 February 2026 00:58:11 +0000 (0:00:00.434) 0:00:42.129 *******
2026-02-02 01:00:53.794332 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.794342 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.794352 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.794361 | orchestrator |
2026-02-02 01:00:53.794371 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-02-02 01:00:53.794381 | orchestrator | Monday 02 February 2026 00:58:11 +0000 (0:00:00.649) 0:00:42.778 *******
2026-02-02 01:00:53.794398 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.794407 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.794417 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.794427 | orchestrator |
2026-02-02 01:00:53.794436 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-02-02 01:00:53.794446 | orchestrator | Monday 02 February 2026 00:58:12 +0000 (0:00:00.381) 0:00:43.160 *******
2026-02-02 01:00:53.794456 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.794465 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.794475 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.794484 | orchestrator |
2026-02-02 01:00:53.794494 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-02-02 01:00:53.794504 | orchestrator | Monday 02 February 2026 00:58:12 +0000 (0:00:00.472) 0:00:43.632 *******
2026-02-02 01:00:53.794513 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.794523 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.794533 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.794542 | orchestrator |
2026-02-02 01:00:53.794552 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-02-02 01:00:53.794562 | orchestrator | Monday 02 February 2026 00:58:13 +0000
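The skipped tasks above are Kolla-Ansible's Galera recovery path: each node's seqno is registered from grastate.dat, the values are compared, and the host with the largest seqno would be written out as the recovery/bootstrap host. A minimal sketch of that selection logic, with illustrative grastate.dat contents (not taken from this run):

```python
def parse_seqno(grastate_text):
    """Extract the Galera seqno from grastate.dat content; -1 means unknown."""
    for line in grastate_text.splitlines():
        if line.strip().startswith("seqno:"):
            return int(line.split(":", 1)[1].strip())
    return -1

def pick_bootstrap_host(grastates):
    """Return the hostname whose grastate.dat reports the largest seqno,
    i.e. the node that saw the most recent committed transaction."""
    return max(grastates, key=lambda host: parse_seqno(grastates[host]))

# Illustrative per-host file contents (hypothetical values, not from the log):
states = {
    "testbed-node-0": "# GALERA saved state\nversion: 2.1\nseqno: 42\n",
    "testbed-node-1": "# GALERA saved state\nversion: 2.1\nseqno: 57\n",
    "testbed-node-2": "# GALERA saved state\nversion: 2.1\nseqno: -1\n",
}
print(pick_bootstrap_host(states))  # testbed-node-1
```

In this run the recovery branch is skipped entirely because the play is a fresh deploy, not a cluster recovery.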
(0:00:00.396) 0:00:44.029 *******
2026-02-02 01:00:53.794571 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.794581 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.794601 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.794611 | orchestrator |
2026-02-02 01:00:53.794621 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************
2026-02-02 01:00:53.794631 | orchestrator | Monday 02 February 2026 00:58:13 +0000 (0:00:00.597) 0:00:44.627 *******
2026-02-02 01:00:53.794640 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.794650 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.794659 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.794669 | orchestrator |
2026-02-02 01:00:53.794678 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] ****************************
2026-02-02 01:00:53.794688 | orchestrator | Monday 02 February 2026 00:58:13 +0000 (0:00:00.365) 0:00:44.992 *******
2026-02-02 01:00:53.794700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra':
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.794723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 
fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.794741 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.794751 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.794762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-02 01:00:53.794773 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.794783 | orchestrator |
2026-02-02 01:00:53.794793 | orchestrator | TASK [mariadb : Wait for slave MariaDB] ****************************************
2026-02-02 01:00:53.794809 | orchestrator | Monday 02 February 2026 00:58:16 +0000 (0:00:02.603) 0:00:47.596 *******
2026-02-02 01:00:53.794818 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.794828 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.794837 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.794847 | orchestrator |
2026-02-02 01:00:53.794857 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] ***************************
2026-02-02 01:00:53.794866 | orchestrator | Monday 02 February 2026 00:58:16 +0000 (0:00:00.343) 0:00:47.939 *******
2026-02-02 01:00:53.794891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro',
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.794903 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:00:53.794914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 01:00:53.794931 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.794943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-02 01:00:53.794953 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.794988 | orchestrator |
2026-02-02 01:00:53.794998 | orchestrator | TASK [mariadb : Wait for master mariadb] ***************************************
2026-02-02 01:00:53.795008 | orchestrator | Monday 02 February 2026 00:58:19 +0000 (0:00:02.627) 0:00:50.567 *******
2026-02-02 01:00:53.795017 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.795027 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.795042 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.795052 | orchestrator |
2026-02-02 01:00:53.795062 | orchestrator | TASK [service-check : mariadb | Get container facts]
***************************
2026-02-02 01:00:53.795071 | orchestrator | Monday 02 February 2026 00:58:19 +0000 (0:00:00.338) 0:00:50.906 *******
2026-02-02 01:00:53.795081 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.795105 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.795126 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.795135 | orchestrator |
2026-02-02 01:00:53.795145 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-02-02 01:00:53.795155 | orchestrator | Monday 02 February 2026 00:58:20 +0000 (0:00:00.359) 0:00:51.265 *******
2026-02-02 01:00:53.795165 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.795175 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.795184 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.795194 | orchestrator |
2026-02-02 01:00:53.795204 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-02-02 01:00:53.795214 | orchestrator | Monday 02 February 2026 00:58:20 +0000 (0:00:00.336) 0:00:51.602 *******
2026-02-02 01:00:53.795223 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.795233 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.795243 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.795253 | orchestrator |
2026-02-02 01:00:53.795262 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-02 01:00:53.795278 | orchestrator | Monday 02 February 2026 00:58:21 +0000 (0:00:00.806) 0:00:52.409 *******
2026-02-02 01:00:53.795288 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.795298 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.795307 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.795317 | orchestrator |
2026-02-02 01:00:53.795327 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-02-02 01:00:53.795336 | orchestrator | Monday 02 February 2026 00:58:21 +0000 (0:00:00.340) 0:00:52.750 *******
2026-02-02 01:00:53.795346 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:00:53.795356 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:00:53.795365 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:00:53.795375 | orchestrator |
2026-02-02 01:00:53.795385 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-02-02 01:00:53.795394 | orchestrator | Monday 02 February 2026 00:58:22 +0000 (0:00:00.960) 0:00:53.710 *******
2026-02-02 01:00:53.795404 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:00:53.795415 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:00:53.795424 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:00:53.795434 | orchestrator |
2026-02-02 01:00:53.795444 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-02-02 01:00:53.795453 | orchestrator | Monday 02 February 2026 00:58:23 +0000 (0:00:00.671) 0:00:54.382 *******
2026-02-02 01:00:53.795463 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:00:53.795473 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:00:53.795562 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:00:53.795585 | orchestrator |
2026-02-02 01:00:53.795595 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-02 01:00:53.795604 | orchestrator | Monday 02 February 2026 00:58:23 +0000 (0:00:00.367) 0:00:54.749 *******
2026-02-02 01:00:53.795615 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-02-02 01:00:53.795626 | orchestrator | ...ignoring
2026-02-02 01:00:53.795636 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-02-02 01:00:53.795646 | orchestrator | ...ignoring
2026-02-02 01:00:53.795660 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-02-02 01:00:53.795670 | orchestrator | ...ignoring
2026-02-02 01:00:53.795679 | orchestrator |
2026-02-02 01:00:53.795689 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-02 01:00:53.795699 | orchestrator | Monday 02 February 2026 00:58:34 +0000 (0:00:10.804) 0:01:05.553 *******
2026-02-02 01:00:53.795708 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:00:53.795718 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:00:53.795728 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:00:53.795737 | orchestrator |
2026-02-02 01:00:53.795747 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-02 01:00:53.795756 | orchestrator | Monday 02 February 2026 00:58:34 +0000 (0:00:00.390) 0:01:05.943 *******
2026-02-02 01:00:53.795766 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.795776 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.795785 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.795795 | orchestrator |
2026-02-02 01:00:53.795804 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-02 01:00:53.795814 | orchestrator | Monday 02 February 2026 00:58:35 +0000 (0:00:00.618) 0:01:06.562 *******
2026-02-02 01:00:53.795824 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.795833 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.795843 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.795852 | orchestrator |
2026-02-02 01:00:53.795862 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-02 01:00:53.795878 | orchestrator | Monday 02 February 2026 00:58:35 +0000 (0:00:00.383) 0:01:06.945 *******
2026-02-02 01:00:53.795887 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.795897 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.795907 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.795916 | orchestrator |
2026-02-02 01:00:53.795926 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-02 01:00:53.795935 | orchestrator | Monday 02 February 2026 00:58:36 +0000 (0:00:00.337) 0:01:07.282 *******
2026-02-02 01:00:53.795945 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:00:53.795955 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:00:53.795995 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:00:53.796005 | orchestrator |
2026-02-02 01:00:53.796015 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-02 01:00:53.796033 | orchestrator | Monday 02 February 2026 00:58:36 +0000 (0:00:00.337) 0:01:07.619 *******
2026-02-02 01:00:53.796044 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.796053 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.796063 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.796072 | orchestrator |
2026-02-02 01:00:53.796082 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-02 01:00:53.796092 | orchestrator | Monday 02 February 2026 00:58:37 +0000 (0:00:00.590) 0:01:08.210 *******
2026-02-02 01:00:53.796101 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.796111 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.796121 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-02-02 01:00:53.796130 | orchestrator |
2026-02-02
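The WSREP checks above gate cluster membership on Galera's `wsrep_local_state_comment` status variable reaching `Synced` (they are skipped here because no cluster existed yet). A hedged sketch of that decision over illustrative `SHOW GLOBAL STATUS LIKE 'wsrep_%'` rows:

```python
def wsrep_synced(status_rows):
    """Decide whether a Galera node is fully synced, given (name, value)
    rows as returned by SHOW GLOBAL STATUS LIKE 'wsrep_%'. The row data
    below is illustrative, not captured from this deployment."""
    status = dict(status_rows)
    return status.get("wsrep_local_state_comment") == "Synced"

rows = [("wsrep_cluster_size", "3"), ("wsrep_local_state_comment", "Synced")]
print(wsrep_synced(rows))  # True
# A node still acting as state-transfer donor is not considered synced:
print(wsrep_synced([("wsrep_local_state_comment", "Donor/Desynced")]))  # False
```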
01:00:53.796140 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-02-02 01:00:53.796153 | orchestrator | Monday 02 February 2026 00:58:37 +0000 (0:00:00.452) 0:01:08.662 *******
2026-02-02 01:00:53.796169 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:00:53.796186 | orchestrator |
2026-02-02 01:00:53.796196 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-02-02 01:00:53.796206 | orchestrator | Monday 02 February 2026 00:58:48 +0000 (0:00:10.633) 0:01:19.296 *******
2026-02-02 01:00:53.796216 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:00:53.796225 | orchestrator |
2026-02-02 01:00:53.796235 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-02 01:00:53.796245 | orchestrator | Monday 02 February 2026 00:58:48 +0000 (0:00:00.183) 0:01:19.480 *******
2026-02-02 01:00:53.796255 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.796264 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.796274 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.796283 | orchestrator |
2026-02-02 01:00:53.796293 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-02-02 01:00:53.796302 | orchestrator | Monday 02 February 2026 00:58:49 +0000 (0:00:01.015) 0:01:20.495 *******
2026-02-02 01:00:53.796312 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:00:53.796321 | orchestrator |
2026-02-02 01:00:53.796331 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-02-02 01:00:53.796340 | orchestrator | Monday 02 February 2026 00:58:57 +0000 (0:00:08.468) 0:01:28.964 *******
2026-02-02 01:00:53.796350 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:00:53.796359 | orchestrator |
2026-02-02 01:00:53.796369 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-02-02 01:00:53.796379 | orchestrator | Monday 02 February 2026 00:58:59 +0000 (0:00:01.679) 0:01:30.644 *******
2026-02-02 01:00:53.796388 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:00:53.796398 | orchestrator |
2026-02-02 01:00:53.796407 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-02-02 01:00:53.796417 | orchestrator | Monday 02 February 2026 00:59:02 +0000 (0:00:02.370) 0:01:33.014 *******
2026-02-02 01:00:53.796427 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:00:53.796443 | orchestrator |
2026-02-02 01:00:53.796453 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-02-02 01:00:53.796463 | orchestrator | Monday 02 February 2026 00:59:02 +0000 (0:00:00.127) 0:01:33.141 *******
2026-02-02 01:00:53.796472 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.796482 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.796491 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.796501 | orchestrator |
2026-02-02 01:00:53.796511 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-02-02 01:00:53.796520 | orchestrator | Monday 02 February 2026 00:59:02 +0000 (0:00:00.396) 0:01:33.538 *******
2026-02-02 01:00:53.796530 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:00:53.796540 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:00:53.796554 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:00:53.796564 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-02 01:00:53.796577 | orchestrator |
2026-02-02 01:00:53.796594 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-02 01:00:53.796610 | orchestrator | skipping: no hosts matched
2026-02-02 01:00:53.796623 | orchestrator |
2026-02-02 01:00:53.796648
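After the bootstrap node comes up, the remaining plays bring the other members in one at a time: each node's container is restarted, then the play waits for port liveness and WSREP sync before touching the next node. A sketch of that serialized loop, with stand-in callables in place of the real tasks:

```python
def rolling_restart(nodes, restart, wait_port, wait_synced):
    """Restart cluster members strictly one at a time, as the serialized
    plays do: each node must pass the port-liveness and WSREP-sync gates
    before the next node is restarted. All callables are hypothetical
    stand-ins for the corresponding Ansible tasks."""
    completed = []
    for node in nodes:
        restart(node)       # "Restart MariaDB container"
        wait_port(node)     # "Wait for MariaDB service port liveness"
        wait_synced(node)   # "Wait for MariaDB service to sync WSREP"
        completed.append(node)
    return completed

events = []
nodes = ["testbed-node-1", "testbed-node-2", "testbed-node-0"]
rolling_restart(
    nodes,
    restart=lambda n: events.append(("restart", n)),
    wait_port=lambda n: events.append(("port", n)),
    wait_synced=lambda n: events.append(("synced", n)),
)
print(events[0])  # ('restart', 'testbed-node-1')
```

Restarting serially keeps the Galera cluster quorate throughout: at most one member is down at any moment, matching the node-1, node-2, then bootstrap-node order visible in the plays that follow.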
| orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-02 01:00:53.796666 | orchestrator |
2026-02-02 01:00:53.796681 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-02 01:00:53.796697 | orchestrator | Monday 02 February 2026 00:59:03 +0000 (0:00:00.582) 0:01:34.120 *******
2026-02-02 01:00:53.796713 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:00:53.796728 | orchestrator |
2026-02-02 01:00:53.796744 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-02 01:00:53.796760 | orchestrator | Monday 02 February 2026 00:59:20 +0000 (0:00:17.412) 0:01:51.532 *******
2026-02-02 01:00:53.796776 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:00:53.796792 | orchestrator |
2026-02-02 01:00:53.796809 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-02 01:00:53.796825 | orchestrator | Monday 02 February 2026 00:59:36 +0000 (0:00:15.535) 0:02:07.068 *******
2026-02-02 01:00:53.796842 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:00:53.796857 | orchestrator |
2026-02-02 01:00:53.796875 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-02 01:00:53.796890 | orchestrator |
2026-02-02 01:00:53.796908 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-02 01:00:53.796918 | orchestrator | Monday 02 February 2026 00:59:38 +0000 (0:00:02.011) 0:02:09.080 *******
2026-02-02 01:00:53.796928 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:00:53.796937 | orchestrator |
2026-02-02 01:00:53.796947 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-02 01:00:53.796956 | orchestrator | Monday 02 February 2026 00:59:54 +0000 (0:00:16.152) 0:02:25.232 *******
2026-02-02 01:00:53.796997 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:00:53.797010 | orchestrator |
2026-02-02 01:00:53.797023 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-02 01:00:53.797041 | orchestrator | Monday 02 February 2026 01:00:09 +0000 (0:00:15.723) 0:02:40.956 *******
2026-02-02 01:00:53.797051 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:00:53.797062 | orchestrator |
2026-02-02 01:00:53.797089 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-02 01:00:53.797105 | orchestrator |
2026-02-02 01:00:53.797121 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-02 01:00:53.797136 | orchestrator | Monday 02 February 2026 01:00:12 +0000 (0:00:02.526) 0:02:43.482 *******
2026-02-02 01:00:53.797153 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:00:53.797169 | orchestrator |
2026-02-02 01:00:53.797184 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-02 01:00:53.797199 | orchestrator | Monday 02 February 2026 01:00:30 +0000 (0:00:17.719) 0:03:01.202 *******
2026-02-02 01:00:53.797214 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:00:53.797242 | orchestrator |
2026-02-02 01:00:53.797258 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-02 01:00:53.797274 | orchestrator | Monday 02 February 2026 01:00:30 +0000 (0:00:00.585) 0:03:01.787 *******
2026-02-02 01:00:53.797289 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:00:53.797304 | orchestrator |
2026-02-02 01:00:53.797318 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-02 01:00:53.797334 | orchestrator |
2026-02-02 01:00:53.797349 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-02 01:00:53.797364 | orchestrator | Monday 02 February 2026 01:00:33 +0000 (0:00:02.319) 0:03:04.107 *******
2026-02-02 01:00:53.797379 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:00:53.797395 | orchestrator |
2026-02-02 01:00:53.797411 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-02-02 01:00:53.797427 | orchestrator | Monday 02 February 2026 01:00:33 +0000 (0:00:00.567) 0:03:04.674 *******
2026-02-02 01:00:53.797443 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.797459 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.797475 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:00:53.797491 | orchestrator |
2026-02-02 01:00:53.797507 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-02-02 01:00:53.797522 | orchestrator | Monday 02 February 2026 01:00:35 +0000 (0:00:02.195) 0:03:06.870 *******
2026-02-02 01:00:53.797538 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.797555 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.797572 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:00:53.797587 | orchestrator |
2026-02-02 01:00:53.797602 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-02-02 01:00:53.797619 | orchestrator | Monday 02 February 2026 01:00:38 +0000 (0:00:02.391) 0:03:09.261 *******
2026-02-02 01:00:53.797634 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:00:53.797651 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:00:53.797737 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:00:53.797752 | orchestrator |
2026-02-02 01:00:53.797762 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-02-02 01:00:53.797772 | orchestrator | Monday 02 February 2026 01:00:40 +0000 (0:00:02.199) 0:03:11.461 *******
2026-02-02 01:00:53.797782 |
orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.797791 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.797801 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:00:53.797810 | orchestrator | 2026-02-02 01:00:53.797820 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-02 01:00:53.797830 | orchestrator | Monday 02 February 2026 01:00:42 +0000 (0:00:02.195) 0:03:13.657 ******* 2026-02-02 01:00:53.797841 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: ansible.module_utils.basic.AnsibleModule.fail_json() got multiple values for keyword argument 'changed' 2026-02-02 01:00:53.797885 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\n response.raise_for_status()\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\n raise HTTPError(http_error_msg, response=self)\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.47/containers/fd4786952ef505ad9649bfd55453bedb54f1cba3a53a2c2aa5c4cd00f3edd2c0/json\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/tmp/ansible_kolla_container_facts_payload_18j3fp5k/ansible_kolla_container_facts_payload.zip/ansible/modules/kolla_container_facts.py\", line 251, in main\n File \"/tmp/ansible_kolla_container_facts_payload_18j3fp5k/ansible_kolla_container_facts_payload.zip/ansible/modules/kolla_container_facts.py\", line 141, in get_containers\n File \"/usr/lib/python3/dist-packages/docker/models/containers.py\", line 1018, in list\n containers.append(self.get(r['Id']))\n ^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3/dist-packages/docker/models/containers.py\", line 
954, in get\n resp = self.client.api.inspect_container(container_id)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3/dist-packages/docker/utils/decorators.py\", line 19, in wrapped\n return f(self, resource_id, *args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3/dist-packages/docker/api/container.py\", line 793, in inspect_container\n return self._result(\n ^^^^^^^^^^^^^\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 281, in _result\n self._raise_for_status(response)\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\n raise create_api_error_from_http_exception(e) from e\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\n raise cls(e, response=response, explanation=explanation) from e\ndocker.errors.NotFound: 404 Client Error for http+docker://localhost/v1.47/containers/fd4786952ef505ad9649bfd55453bedb54f1cba3a53a2c2aa5c4cd00f3edd2c0/json: Not Found (\"No such container: fd4786952ef505ad9649bfd55453bedb54f1cba3a53a2c2aa5c4cd00f3edd2c0\")\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"\", line 107, in \n File \"\", line 99, in _ansiballz_main\n File \"\", line 47, in invoke_module\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_kolla_container_facts_payload_18j3fp5k/ansible_kolla_container_facts_payload.zip/ansible/modules/kolla_container_facts.py\", line 259, in \n File \"/tmp/ansible_kolla_container_facts_payload_18j3fp5k/ansible_kolla_container_facts_payload.zip/ansible/modules/kolla_container_facts.py\", line 254, in main\nTypeError: ansible.module_utils.basic.AnsibleModule.fail_json() got multiple values for keyword argument 'changed'\n", "module_stdout": "", "msg": "MODULE FAILURE: 
No start of json char found\nSee stdout/stderr for the exact error", "rc": 1} 2026-02-02 01:00:53.797910 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:00:53.797921 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:00:53.797931 | orchestrator | 2026-02-02 01:00:53.797941 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-02 01:00:53.797951 | orchestrator | Monday 02 February 2026 01:00:47 +0000 (0:00:04.615) 0:03:18.272 ******* 2026-02-02 01:00:53.798008 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.798090 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.798100 | orchestrator | 2026-02-02 01:00:53.798110 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-02 01:00:53.798120 | orchestrator | Monday 02 February 2026 01:00:49 +0000 (0:00:02.313) 0:03:20.586 ******* 2026-02-02 01:00:53.798129 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.798139 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.798148 | orchestrator | 2026-02-02 01:00:53.798164 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-02 01:00:53.798174 | orchestrator | Monday 02 February 2026 01:00:50 +0000 (0:00:00.603) 0:03:21.189 ******* 2026-02-02 01:00:53.798184 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:00:53.798194 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:00:53.798203 | orchestrator | 2026-02-02 01:00:53.798213 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-02 01:00:53.798223 | orchestrator | Monday 02 February 2026 01:00:52 +0000 (0:00:02.667) 0:03:23.857 ******* 2026-02-02 01:00:53.798241 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:00:53.798251 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:00:53.798260 | orchestrator | 2026-02-02 01:00:53.798270 | orchestrator | 
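The `service-check` failure above chains two problems: the docker SDK raised `NotFound` because container `fd4786…` disappeared between the container-list call and the per-container inspect (a list-then-inspect race during the MariaDB restart), and the module's error handler then crashed because `changed` reached `fail_json()` twice. A minimal stand-in reproducing that secondary `TypeError` (the `fail_json` signature here is a hypothetical sketch, not the kolla or Ansible code itself):

```python
def fail_json(*, msg, changed=False, **extra):
    """Hypothetical stand-in for AnsibleModule.fail_json() -- an assumption,
    not the real Ansible signature."""
    return {"msg": msg, "changed": changed, **extra}

# A result dict that already carries 'changed', expanded with ** alongside an
# explicit changed= keyword: the same keyword arrives twice and Python raises
# "got multiple values for keyword argument 'changed'", masking the original
# docker NotFound error -- the same shape of failure as in the log above.
result = {"changed": False, "msg": "No such container"}
try:
    fail_json(changed=False, **result)
except TypeError as exc:
    print(exc)  # ... got multiple values for keyword argument 'changed'
```

The fix pattern is simply to not pass the duplicated key twice, e.g. pop `changed` from the result dict before expanding it.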
PLAY RECAP *********************************************************************
2026-02-02 01:00:53.798280 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-02-02 01:00:53.798291 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=1  skipped=36  rescued=0 ignored=1
2026-02-02 01:00:53.798301 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1
2026-02-02 01:00:53.798312 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1
2026-02-02 01:00:53.798322 | orchestrator |
2026-02-02 01:00:53.798332 | orchestrator |
2026-02-02 01:00:53.798342 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 01:00:53.798351 | orchestrator | Monday 02 February 2026 01:00:53 +0000 (0:00:00.152) 0:03:24.010 *******
2026-02-02 01:00:53.798361 | orchestrator | ===============================================================================
2026-02-02 01:00:53.798371 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.56s
2026-02-02 01:00:53.798381 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.26s
2026-02-02 01:00:53.798399 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.72s
2026-02-02 01:00:53.798409 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.80s
2026-02-02 01:00:53.798419 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.63s
2026-02-02 01:00:53.798428 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.47s
2026-02-02 01:00:53.798438 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.99s
2026-02-02 01:00:53.798447 | orchestrator | service-check : mariadb | Get container facts --------------------------- 4.62s
2026-02-02 01:00:53.798457 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 4.57s
2026-02-02 01:00:53.798467 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.54s
2026-02-02 01:00:53.798476 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.56s
2026-02-02 01:00:53.798486 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.23s
2026-02-02 01:00:53.798496 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.10s
2026-02-02 01:00:53.798505 | orchestrator | Check MariaDB service --------------------------------------------------- 3.00s
2026-02-02 01:00:53.798515 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.99s
2026-02-02 01:00:53.798524 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.70s
2026-02-02 01:00:53.798534 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.67s
2026-02-02 01:00:53.798543 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.63s
2026-02-02 01:00:53.798553 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.60s
2026-02-02 01:00:53.798563 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.48s
2026-02-02 01:00:53.798573 | orchestrator | 2026-02-02 01:00:53 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED
2026-02-02 01:00:53.798583 | orchestrator | 2026-02-02 01:00:53 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:00:56.852042 | orchestrator | 2026-02-02 01:00:56 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:00:56.852785 | orchestrator | 2026-02-02 
01:01:30.338554 | orchestrator | 2026-02-02 01:01:30 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:01:33.381413 | orchestrator | 2026-02-02 01:01:33 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED 2026-02-02 01:01:33.383042 | orchestrator | 2026-02-02 01:01:33 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED 2026-02-02 01:01:33.384839 | orchestrator | 2026-02-02 01:01:33 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED 2026-02-02 01:01:33.384854 | orchestrator | 2026-02-02 01:01:33 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:01:36.425090 | orchestrator | 2026-02-02 01:01:36 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED 2026-02-02 01:01:36.426088 | orchestrator | 2026-02-02 01:01:36 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state STARTED 2026-02-02 01:01:36.426810 | orchestrator | 2026-02-02 01:01:36 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED 2026-02-02 01:01:36.426885 | orchestrator | 2026-02-02 01:01:36 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:01:39.480582 | orchestrator | 2026-02-02 01:01:39 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED 2026-02-02 01:01:39.482980 | orchestrator | 2026-02-02 01:01:39 | INFO  | Task dad75ec2-42f8-4fac-b1fe-1595c188d191 is in state SUCCESS 2026-02-02 01:01:39.486709 | orchestrator | 2026-02-02 01:01:39.486813 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-02 01:01:39.486945 | orchestrator | 2.16.14 2026-02-02 01:01:39.486978 | orchestrator | 2026-02-02 01:01:39.486997 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-02-02 01:01:39.487015 | orchestrator | 2026-02-02 01:01:39.487027 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 01:01:39.487038 | 
orchestrator | Monday 02 February 2026 00:59:28 +0000 (0:00:00.641) 0:00:00.641 ******* 2026-02-02 01:01:39.487049 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 01:01:39.487061 | orchestrator | 2026-02-02 01:01:39.487072 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-02 01:01:39.487084 | orchestrator | Monday 02 February 2026 00:59:29 +0000 (0:00:00.667) 0:00:01.308 ******* 2026-02-02 01:01:39.487095 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.487650 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.487674 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.487687 | orchestrator | 2026-02-02 01:01:39.487699 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 01:01:39.487711 | orchestrator | Monday 02 February 2026 00:59:29 +0000 (0:00:00.606) 0:00:01.915 ******* 2026-02-02 01:01:39.487787 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.487800 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.487812 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.487833 | orchestrator | 2026-02-02 01:01:39.487853 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 01:01:39.487871 | orchestrator | Monday 02 February 2026 00:59:30 +0000 (0:00:00.300) 0:00:02.215 ******* 2026-02-02 01:01:39.487890 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.488284 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.488303 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.488314 | orchestrator | 2026-02-02 01:01:39.488326 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 01:01:39.488337 | orchestrator | Monday 02 February 2026 00:59:30 +0000 (0:00:00.759) 0:00:02.975 ******* 2026-02-02 01:01:39.488348 | 
orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.488358 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.488369 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.488380 | orchestrator | 2026-02-02 01:01:39.488391 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 01:01:39.488402 | orchestrator | Monday 02 February 2026 00:59:31 +0000 (0:00:00.272) 0:00:03.248 ******* 2026-02-02 01:01:39.488413 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.488423 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.488434 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.488445 | orchestrator | 2026-02-02 01:01:39.488456 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 01:01:39.488467 | orchestrator | Monday 02 February 2026 00:59:31 +0000 (0:00:00.301) 0:00:03.549 ******* 2026-02-02 01:01:39.488478 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.488488 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.488499 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.488510 | orchestrator | 2026-02-02 01:01:39.488529 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 01:01:39.488548 | orchestrator | Monday 02 February 2026 00:59:31 +0000 (0:00:00.289) 0:00:03.838 ******* 2026-02-02 01:01:39.488591 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.488610 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.488627 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.488646 | orchestrator | 2026-02-02 01:01:39.488665 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-02 01:01:39.488702 | orchestrator | Monday 02 February 2026 00:59:32 +0000 (0:00:00.429) 0:00:04.268 ******* 2026-02-02 01:01:39.488721 | orchestrator | ok: [testbed-node-3] 2026-02-02 
01:01:39.488734 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.488763 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.488773 | orchestrator | 2026-02-02 01:01:39.488784 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 01:01:39.488795 | orchestrator | Monday 02 February 2026 00:59:32 +0000 (0:00:00.290) 0:00:04.558 ******* 2026-02-02 01:01:39.488806 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 01:01:39.488824 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 01:01:39.488842 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 01:01:39.488860 | orchestrator | 2026-02-02 01:01:39.488878 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-02 01:01:39.488898 | orchestrator | Monday 02 February 2026 00:59:33 +0000 (0:00:00.609) 0:00:05.168 ******* 2026-02-02 01:01:39.488948 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.488965 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.488978 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.488991 | orchestrator | 2026-02-02 01:01:39.489003 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 01:01:39.489016 | orchestrator | Monday 02 February 2026 00:59:33 +0000 (0:00:00.418) 0:00:05.586 ******* 2026-02-02 01:01:39.489029 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 01:01:39.489042 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 01:01:39.489054 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 01:01:39.489066 | orchestrator | 2026-02-02 01:01:39.489080 | orchestrator | 
TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 01:01:39.489093 | orchestrator | Monday 02 February 2026 00:59:35 +0000 (0:00:01.902) 0:00:07.489 ******* 2026-02-02 01:01:39.489106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-02 01:01:39.489119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-02 01:01:39.489133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-02 01:01:39.489146 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.489159 | orchestrator | 2026-02-02 01:01:39.489273 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 01:01:39.489292 | orchestrator | Monday 02 February 2026 00:59:35 +0000 (0:00:00.551) 0:00:08.040 ******* 2026-02-02 01:01:39.489306 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.489321 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.489332 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.489343 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.489359 | orchestrator | 2026-02-02 01:01:39.489377 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 01:01:39.489408 | orchestrator | Monday 02 February 2026 00:59:36 +0000 
(0:00:00.743) 0:00:08.784 ******* 2026-02-02 01:01:39.489430 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.489469 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.489489 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.489510 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.489526 | orchestrator | 2026-02-02 01:01:39.489554 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-02 01:01:39.489572 | orchestrator | Monday 02 February 2026 00:59:37 +0000 (0:00:00.374) 0:00:09.158 ******* 2026-02-02 01:01:39.489592 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '49cda33426cb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 
00:59:34.130370', 'end': '2026-02-02 00:59:34.165677', 'delta': '0:00:00.035307', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['49cda33426cb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-02 01:01:39.489615 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6ca172e6f1c1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 00:59:34.749497', 'end': '2026-02-02 00:59:34.779346', 'delta': '0:00:00.029849', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6ca172e6f1c1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 01:01:39.489721 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5e0feb5d97d6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 00:59:35.282201', 'end': '2026-02-02 00:59:35.313360', 'delta': '0:00:00.031159', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['5e0feb5d97d6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 01:01:39.489747 | orchestrator | 2026-02-02 01:01:39.489766 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 01:01:39.489784 | orchestrator | Monday 02 February 2026 00:59:37 +0000 (0:00:00.193) 0:00:09.352 ******* 2026-02-02 01:01:39.489803 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.489822 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.489847 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.489869 | orchestrator | 2026-02-02 01:01:39.489902 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 01:01:39.490082 | orchestrator | Monday 02 February 2026 00:59:37 +0000 (0:00:00.456) 0:00:09.809 ******* 2026-02-02 01:01:39.490097 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-02 01:01:39.490108 | orchestrator | 2026-02-02 01:01:39.490119 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 01:01:39.490130 | orchestrator | Monday 02 February 2026 00:59:39 +0000 (0:00:01.517) 0:00:11.327 ******* 2026-02-02 01:01:39.490141 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.490153 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.490163 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.490173 | orchestrator | 2026-02-02 01:01:39.490182 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 01:01:39.490192 | orchestrator | Monday 02 February 2026 00:59:39 +0000 (0:00:00.296) 0:00:11.623 ******* 2026-02-02 01:01:39.490202 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.490211 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.490221 | orchestrator | skipping: 
[testbed-node-5] 2026-02-02 01:01:39.490231 | orchestrator | 2026-02-02 01:01:39.490240 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 01:01:39.490250 | orchestrator | Monday 02 February 2026 00:59:40 +0000 (0:00:00.457) 0:00:12.081 ******* 2026-02-02 01:01:39.490260 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.490269 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.490279 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.490288 | orchestrator | 2026-02-02 01:01:39.490298 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 01:01:39.490308 | orchestrator | Monday 02 February 2026 00:59:40 +0000 (0:00:00.584) 0:00:12.665 ******* 2026-02-02 01:01:39.490317 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.490327 | orchestrator | 2026-02-02 01:01:39.490337 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 01:01:39.490346 | orchestrator | Monday 02 February 2026 00:59:40 +0000 (0:00:00.136) 0:00:12.802 ******* 2026-02-02 01:01:39.490356 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.490366 | orchestrator | 2026-02-02 01:01:39.490384 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 01:01:39.490394 | orchestrator | Monday 02 February 2026 00:59:40 +0000 (0:00:00.232) 0:00:13.035 ******* 2026-02-02 01:01:39.490403 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.490413 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.490423 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.490432 | orchestrator | 2026-02-02 01:01:39.490442 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 01:01:39.490452 | orchestrator | Monday 02 February 2026 00:59:41 +0000 (0:00:00.308) 
0:00:13.343 ******* 2026-02-02 01:01:39.490462 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.490471 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.490481 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.490490 | orchestrator | 2026-02-02 01:01:39.490500 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 01:01:39.490510 | orchestrator | Monday 02 February 2026 00:59:41 +0000 (0:00:00.316) 0:00:13.660 ******* 2026-02-02 01:01:39.490519 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.490529 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.490539 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.490548 | orchestrator | 2026-02-02 01:01:39.490558 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 01:01:39.490568 | orchestrator | Monday 02 February 2026 00:59:42 +0000 (0:00:00.537) 0:00:14.197 ******* 2026-02-02 01:01:39.490578 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.490587 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.490597 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.490615 | orchestrator | 2026-02-02 01:01:39.490624 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 01:01:39.490634 | orchestrator | Monday 02 February 2026 00:59:42 +0000 (0:00:00.323) 0:00:14.520 ******* 2026-02-02 01:01:39.490645 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.490654 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.490664 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.490673 | orchestrator | 2026-02-02 01:01:39.490683 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 01:01:39.490693 | orchestrator | Monday 02 February 2026 00:59:42 +0000 (0:00:00.341) 
0:00:14.862 ******* 2026-02-02 01:01:39.490703 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.490712 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.490722 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.490787 | orchestrator | 2026-02-02 01:01:39.490799 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 01:01:39.490810 | orchestrator | Monday 02 February 2026 00:59:43 +0000 (0:00:00.313) 0:00:15.175 ******* 2026-02-02 01:01:39.490827 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.490843 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.490861 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.490877 | orchestrator | 2026-02-02 01:01:39.490892 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 01:01:39.491048 | orchestrator | Monday 02 February 2026 00:59:43 +0000 (0:00:00.527) 0:00:15.703 ******* 2026-02-02 01:01:39.491098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91c179ef--578a--54fb--a2b0--5b892bd3ac18-osd--block--91c179ef--578a--54fb--a2b0--5b892bd3ac18', 'dm-uuid-LVM-CBmyVChEmESNLeBT1MkMSINSk2ajOcjEVW1F4EfsafZbwh6CUXur0lvhuPqTJPPb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91730114--ee0c--5e20--9378--f20099298830-osd--block--91730114--ee0c--5e20--9378--f20099298830', 
'dm-uuid-LVM-cUnpQzpbDAyiRw22abVs1EKRXWL8W9zR4MbPrzayvu0R20HyrCa9xvCO30c5hMd1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--91c179ef--578a--54fb--a2b0--5b892bd3ac18-osd--block--91c179ef--578a--54fb--a2b0--5b892bd3ac18'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6sQpnu-oeog-9uSQ-8irI-YO3i-03S8-oT4k1a', 'scsi-0QEMU_QEMU_HARDDISK_08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8', 'scsi-SQEMU_QEMU_HARDDISK_08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--91730114--ee0c--5e20--9378--f20099298830-osd--block--91730114--ee0c--5e20--9378--f20099298830'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XwcXUw-Uu1v-fj1R-vnAq-N5DG-f8Qb-00N7Lp', 'scsi-0QEMU_QEMU_HARDDISK_2791ba27-caf1-4c6d-bd50-3e0320bbaa42', 'scsi-SQEMU_QEMU_HARDDISK_2791ba27-caf1-4c6d-bd50-3e0320bbaa42'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0ea612a-524a-49e0-9350-b51de64b4b0f', 'scsi-SQEMU_QEMU_HARDDISK_b0ea612a-524a-49e0-9350-b51de64b4b0f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--604951f0--1bde--54b3--957a--2369560b0fa2-osd--block--604951f0--1bde--54b3--957a--2369560b0fa2', 'dm-uuid-LVM-3ZXwImDyw4fF3NRZj3QiF1GeKyro3QPIB5j6nFCKbMCu9pskJdjnWtnHnR26AsdQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edd20676--fc89--5b2b--b977--99722e90cce2-osd--block--edd20676--fc89--5b2b--b977--99722e90cce2', 
'dm-uuid-LVM-oNyUsA3TIQFQdmqZqwf2HNQmWcYixTjxAT4UNR5pMsPpjE904WGrtx3GjqJt36Nz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491427 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.491436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7-osd--block--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7', 'dm-uuid-LVM-OEBwCC3YuzLICYw8CHTrGZLG0LKYZkCfZwwIAaAKsinrIAxMVCTNbWKnsZs1YSLA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0f572543--3461--541d--9614--18cfec52b251-osd--block--0f572543--3461--541d--9614--18cfec52b251', 'dm-uuid-LVM-VwC4RRIV6z7NJSpHMy12KJpoxleDt2OKYXZRfTXCKg782JFNl5F2SUpzRNLA70fM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': 
'4096', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part16', 
'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--604951f0--1bde--54b3--957a--2369560b0fa2-osd--block--604951f0--1bde--54b3--957a--2369560b0fa2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HKpUVV-X4dA-EDHp-ilRF-QPyn-Eq8n-30cnG2', 'scsi-0QEMU_QEMU_HARDDISK_8be21885-29f7-4026-87ce-cd032f624f70', 'scsi-SQEMU_QEMU_HARDDISK_8be21885-29f7-4026-87ce-cd032f624f70'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--edd20676--fc89--5b2b--b977--99722e90cce2-osd--block--edd20676--fc89--5b2b--b977--99722e90cce2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OR8WAX-OQWJ-XEg9-Wwht-yTg9-CKMR-3T1I3f', 'scsi-0QEMU_QEMU_HARDDISK_78cf2400-96ef-4814-8ef8-9c5b7903f7b2', 'scsi-SQEMU_QEMU_HARDDISK_78cf2400-96ef-4814-8ef8-9c5b7903f7b2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2465f817-ac52-4990-8055-49becb307e2f', 'scsi-SQEMU_QEMU_HARDDISK_2465f817-ac52-4990-8055-49becb307e2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491671 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.491683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 01:01:39.491715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part1', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part14', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part15', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part16', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491726 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7-osd--block--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WGi4of-ix8g-g3UD-7qOZ-6j2X-fOzY-1PZkAt', 'scsi-0QEMU_QEMU_HARDDISK_09bfd8bd-6f2f-4d2c-8da9-081114b71f81', 'scsi-SQEMU_QEMU_HARDDISK_09bfd8bd-6f2f-4d2c-8da9-081114b71f81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0f572543--3461--541d--9614--18cfec52b251-osd--block--0f572543--3461--541d--9614--18cfec52b251'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YhDmgn-v3yM-kiZc-JIhA-3oL5-HNY3-C0uZ5o', 'scsi-0QEMU_QEMU_HARDDISK_16c98517-e0bb-4d3e-881d-5a0c6479c324', 'scsi-SQEMU_QEMU_HARDDISK_16c98517-e0bb-4d3e-881d-5a0c6479c324'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6ca68c3-b66c-4649-954f-01a9ba336075', 'scsi-SQEMU_QEMU_HARDDISK_b6ca68c3-b66c-4649-954f-01a9ba336075'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 01:01:39.491774 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.491782 | orchestrator | 2026-02-02 01:01:39.491791 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 01:01:39.491799 | orchestrator | Monday 02 February 2026 00:59:44 +0000 (0:00:00.637) 0:00:16.340 ******* 2026-02-02 01:01:39.491807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91c179ef--578a--54fb--a2b0--5b892bd3ac18-osd--block--91c179ef--578a--54fb--a2b0--5b892bd3ac18', 'dm-uuid-LVM-CBmyVChEmESNLeBT1MkMSINSk2ajOcjEVW1F4EfsafZbwh6CUXur0lvhuPqTJPPb'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.491834 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91730114--ee0c--5e20--9378--f20099298830-osd--block--91730114--ee0c--5e20--9378--f20099298830', 'dm-uuid-LVM-cUnpQzpbDAyiRw22abVs1EKRXWL8W9zR4MbPrzayvu0R20HyrCa9xvCO30c5hMd1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.491863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.491872 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.491881 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.491896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.491905 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.491943 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--604951f0--1bde--54b3--957a--2369560b0fa2-osd--block--604951f0--1bde--54b3--957a--2369560b0fa2', 'dm-uuid-LVM-3ZXwImDyw4fF3NRZj3QiF1GeKyro3QPIB5j6nFCKbMCu9pskJdjnWtnHnR26AsdQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.491965 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.491978 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edd20676--fc89--5b2b--b977--99722e90cce2-osd--block--edd20676--fc89--5b2b--b977--99722e90cce2', 'dm-uuid-LVM-oNyUsA3TIQFQdmqZqwf2HNQmWcYixTjxAT4UNR5pMsPpjE904WGrtx3GjqJt36Nz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.491986 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492000 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492009 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492018 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492031 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492051 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492067 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_ed850c6f-7155-455b-802e-f8313bdcc2ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-02 01:01:39.492082 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492097 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--91c179ef--578a--54fb--a2b0--5b892bd3ac18-osd--block--91c179ef--578a--54fb--a2b0--5b892bd3ac18'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6sQpnu-oeog-9uSQ-8irI-YO3i-03S8-oT4k1a', 'scsi-0QEMU_QEMU_HARDDISK_08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8', 'scsi-SQEMU_QEMU_HARDDISK_08cb0a0c-22c5-4be4-bae6-cc486e6cf4a8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492107 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492121 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--91730114--ee0c--5e20--9378--f20099298830-osd--block--91730114--ee0c--5e20--9378--f20099298830'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XwcXUw-Uu1v-fj1R-vnAq-N5DG-f8Qb-00N7Lp', 'scsi-0QEMU_QEMU_HARDDISK_2791ba27-caf1-4c6d-bd50-3e0320bbaa42', 'scsi-SQEMU_QEMU_HARDDISK_2791ba27-caf1-4c6d-bd50-3e0320bbaa42'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492130 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492144 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0ea612a-524a-49e0-9350-b51de64b4b0f', 'scsi-SQEMU_QEMU_HARDDISK_b0ea612a-524a-49e0-9350-b51de64b4b0f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492163 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part15', 
'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_923eb351-799f-412b-88c1-7c0ba22434bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492173 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492187 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': 
['ceph--604951f0--1bde--54b3--957a--2369560b0fa2-osd--block--604951f0--1bde--54b3--957a--2369560b0fa2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HKpUVV-X4dA-EDHp-ilRF-QPyn-Eq8n-30cnG2', 'scsi-0QEMU_QEMU_HARDDISK_8be21885-29f7-4026-87ce-cd032f624f70', 'scsi-SQEMU_QEMU_HARDDISK_8be21885-29f7-4026-87ce-cd032f624f70'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492199 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--edd20676--fc89--5b2b--b977--99722e90cce2-osd--block--edd20676--fc89--5b2b--b977--99722e90cce2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OR8WAX-OQWJ-XEg9-Wwht-yTg9-CKMR-3T1I3f', 'scsi-0QEMU_QEMU_HARDDISK_78cf2400-96ef-4814-8ef8-9c5b7903f7b2', 'scsi-SQEMU_QEMU_HARDDISK_78cf2400-96ef-4814-8ef8-9c5b7903f7b2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492207 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.492216 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2465f817-ac52-4990-8055-49becb307e2f', 'scsi-SQEMU_QEMU_HARDDISK_2465f817-ac52-4990-8055-49becb307e2f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492230 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 
'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492239 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.492247 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7-osd--block--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7', 'dm-uuid-LVM-OEBwCC3YuzLICYw8CHTrGZLG0LKYZkCfZwwIAaAKsinrIAxMVCTNbWKnsZs1YSLA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492260 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0f572543--3461--541d--9614--18cfec52b251-osd--block--0f572543--3461--541d--9614--18cfec52b251', 'dm-uuid-LVM-VwC4RRIV6z7NJSpHMy12KJpoxleDt2OKYXZRfTXCKg782JFNl5F2SUpzRNLA70fM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492272 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492280 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492288 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492302 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492317 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492325 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492333 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492345 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492359 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part1', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part14', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part15', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part16', 'scsi-SQEMU_QEMU_HARDDISK_1f095ad0-f8d9-4c0e-a607-577effe431db-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-02 01:01:39.492374 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7-osd--block--ee22aeb6--8be3--5eb7--a208--f7c11744cdf7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WGi4of-ix8g-g3UD-7qOZ-6j2X-fOzY-1PZkAt', 'scsi-0QEMU_QEMU_HARDDISK_09bfd8bd-6f2f-4d2c-8da9-081114b71f81', 'scsi-SQEMU_QEMU_HARDDISK_09bfd8bd-6f2f-4d2c-8da9-081114b71f81'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492387 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0f572543--3461--541d--9614--18cfec52b251-osd--block--0f572543--3461--541d--9614--18cfec52b251'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YhDmgn-v3yM-kiZc-JIhA-3oL5-HNY3-C0uZ5o', 'scsi-0QEMU_QEMU_HARDDISK_16c98517-e0bb-4d3e-881d-5a0c6479c324', 'scsi-SQEMU_QEMU_HARDDISK_16c98517-e0bb-4d3e-881d-5a0c6479c324'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492395 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6ca68c3-b66c-4649-954f-01a9ba336075', 'scsi-SQEMU_QEMU_HARDDISK_b6ca68c3-b66c-4649-954f-01a9ba336075'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492407 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 01:01:39.492421 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.492429 | orchestrator | 2026-02-02 01:01:39.492437 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-02 01:01:39.492445 | orchestrator | Monday 02 February 2026 00:59:45 +0000 (0:00:00.768) 0:00:17.109 ******* 2026-02-02 01:01:39.492453 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.492461 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.492469 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.492476 | orchestrator | 2026-02-02 01:01:39.492484 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-02 01:01:39.492492 | orchestrator | Monday 02 February 2026 00:59:45 +0000 (0:00:00.739) 0:00:17.849 ******* 2026-02-02 01:01:39.492500 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.492507 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.492515 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.492523 | orchestrator | 2026-02-02 01:01:39.492530 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 01:01:39.492538 | orchestrator | Monday 02 February 2026 00:59:46 +0000 (0:00:00.556) 0:00:18.405 ******* 2026-02-02 01:01:39.492546 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.492553 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.492561 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.492569 | orchestrator | 2026-02-02 01:01:39.492577 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 01:01:39.492584 | orchestrator | Monday 02 February 2026 00:59:47 +0000 (0:00:00.683) 0:00:19.089 
******* 2026-02-02 01:01:39.492592 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.492600 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.492608 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.492615 | orchestrator | 2026-02-02 01:01:39.492623 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 01:01:39.492631 | orchestrator | Monday 02 February 2026 00:59:47 +0000 (0:00:00.329) 0:00:19.418 ******* 2026-02-02 01:01:39.492639 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.492647 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.492655 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.492663 | orchestrator | 2026-02-02 01:01:39.492671 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 01:01:39.492679 | orchestrator | Monday 02 February 2026 00:59:47 +0000 (0:00:00.446) 0:00:19.865 ******* 2026-02-02 01:01:39.492687 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.492694 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.492702 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.492710 | orchestrator | 2026-02-02 01:01:39.492718 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 01:01:39.492725 | orchestrator | Monday 02 February 2026 00:59:48 +0000 (0:00:00.563) 0:00:20.428 ******* 2026-02-02 01:01:39.492733 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-02 01:01:39.492741 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-02 01:01:39.492749 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-02 01:01:39.492756 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-02 01:01:39.492768 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-02 01:01:39.492776 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-02 01:01:39.492784 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-02 01:01:39.492791 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-02 01:01:39.492799 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-02 01:01:39.492807 | orchestrator | 2026-02-02 01:01:39.492820 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 01:01:39.492828 | orchestrator | Monday 02 February 2026 00:59:49 +0000 (0:00:00.888) 0:00:21.317 ******* 2026-02-02 01:01:39.492836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-02 01:01:39.492844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-02 01:01:39.492851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-02 01:01:39.492859 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.492867 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-02 01:01:39.492875 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-02 01:01:39.492882 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-02 01:01:39.492890 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.492898 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-02 01:01:39.492906 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-02 01:01:39.492931 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-02 01:01:39.492939 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.492947 | orchestrator | 2026-02-02 01:01:39.492955 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-02 01:01:39.492963 | orchestrator | Monday 02 February 2026 00:59:49 +0000 (0:00:00.387) 0:00:21.705 ******* 2026-02-02 
01:01:39.492971 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 01:01:39.492979 | orchestrator | 2026-02-02 01:01:39.492988 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-02 01:01:39.492996 | orchestrator | Monday 02 February 2026 00:59:50 +0000 (0:00:00.842) 0:00:22.547 ******* 2026-02-02 01:01:39.493008 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.493016 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.493024 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.493031 | orchestrator | 2026-02-02 01:01:39.493039 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-02 01:01:39.493047 | orchestrator | Monday 02 February 2026 00:59:50 +0000 (0:00:00.344) 0:00:22.891 ******* 2026-02-02 01:01:39.493055 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.493063 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.493071 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.493078 | orchestrator | 2026-02-02 01:01:39.493086 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-02 01:01:39.493094 | orchestrator | Monday 02 February 2026 00:59:51 +0000 (0:00:00.316) 0:00:23.208 ******* 2026-02-02 01:01:39.493102 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.493110 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.493118 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:01:39.493125 | orchestrator | 2026-02-02 01:01:39.493133 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-02 01:01:39.493141 | orchestrator | Monday 02 February 2026 00:59:51 +0000 (0:00:00.320) 0:00:23.528 ******* 2026-02-02 
01:01:39.493149 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.493157 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.493175 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.493184 | orchestrator | 2026-02-02 01:01:39.493199 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-02 01:01:39.493207 | orchestrator | Monday 02 February 2026 00:59:52 +0000 (0:00:00.681) 0:00:24.210 ******* 2026-02-02 01:01:39.493215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 01:01:39.493223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 01:01:39.493231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 01:01:39.493239 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.493252 | orchestrator | 2026-02-02 01:01:39.493260 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-02 01:01:39.493268 | orchestrator | Monday 02 February 2026 00:59:52 +0000 (0:00:00.384) 0:00:24.594 ******* 2026-02-02 01:01:39.493276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 01:01:39.493284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 01:01:39.493292 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 01:01:39.493300 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.493307 | orchestrator | 2026-02-02 01:01:39.493315 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-02 01:01:39.493323 | orchestrator | Monday 02 February 2026 00:59:52 +0000 (0:00:00.369) 0:00:24.964 ******* 2026-02-02 01:01:39.493331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 01:01:39.493339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 01:01:39.493347 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 01:01:39.493355 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.493363 | orchestrator | 2026-02-02 01:01:39.493371 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-02 01:01:39.493379 | orchestrator | Monday 02 February 2026 00:59:53 +0000 (0:00:00.422) 0:00:25.386 ******* 2026-02-02 01:01:39.493387 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:01:39.493395 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:01:39.493402 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:01:39.493410 | orchestrator | 2026-02-02 01:01:39.493418 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-02 01:01:39.493432 | orchestrator | Monday 02 February 2026 00:59:53 +0000 (0:00:00.340) 0:00:25.727 ******* 2026-02-02 01:01:39.493440 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-02 01:01:39.493448 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-02 01:01:39.493456 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-02 01:01:39.493464 | orchestrator | 2026-02-02 01:01:39.493472 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-02 01:01:39.493480 | orchestrator | Monday 02 February 2026 00:59:54 +0000 (0:00:00.550) 0:00:26.278 ******* 2026-02-02 01:01:39.493488 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 01:01:39.493495 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 01:01:39.493503 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 01:01:39.493511 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-02 01:01:39.493519 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-02 01:01:39.493527 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-02 01:01:39.493535 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-02 01:01:39.493543 | orchestrator | 2026-02-02 01:01:39.493551 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-02 01:01:39.493559 | orchestrator | Monday 02 February 2026 00:59:55 +0000 (0:00:01.075) 0:00:27.353 ******* 2026-02-02 01:01:39.493567 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 01:01:39.493574 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 01:01:39.493582 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 01:01:39.493590 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-02 01:01:39.493598 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 01:01:39.493607 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-02 01:01:39.493618 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-02 01:01:39.493632 | orchestrator | 2026-02-02 01:01:39.493640 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-02 01:01:39.493648 | orchestrator | Monday 02 February 2026 00:59:57 +0000 (0:00:02.119) 0:00:29.473 ******* 2026-02-02 01:01:39.493656 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:01:39.493663 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:01:39.493671 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-02 01:01:39.493679 | orchestrator | 2026-02-02 01:01:39.493687 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-02 01:01:39.493695 | orchestrator | Monday 02 February 2026 00:59:57 +0000 (0:00:00.410) 0:00:29.884 ******* 2026-02-02 01:01:39.493704 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-02 01:01:39.493712 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-02 01:01:39.493721 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-02 01:01:39.493729 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-02 01:01:39.493737 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-02 01:01:39.493745 | orchestrator | 2026-02-02 01:01:39.493753 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-02 01:01:39.493761 | orchestrator | Monday 02 February 2026 01:00:44 +0000 (0:00:46.407) 0:01:16.291 ******* 2026-02-02 01:01:39.493769 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.493776 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.493788 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.493796 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.493804 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.493811 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.493819 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-02 01:01:39.493827 | orchestrator | 2026-02-02 01:01:39.493835 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-02 01:01:39.493843 | orchestrator | Monday 02 February 2026 01:01:08 +0000 (0:00:24.219) 0:01:40.511 ******* 2026-02-02 01:01:39.493851 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.493859 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.493866 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.493874 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.493886 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.493894 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.493902 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 01:01:39.493974 | orchestrator | 2026-02-02 01:01:39.493984 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-02 01:01:39.493992 | orchestrator | Monday 02 February 2026 01:01:20 +0000 (0:00:11.986) 0:01:52.497 ******* 2026-02-02 01:01:39.494000 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.494008 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-02 01:01:39.494045 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-02 01:01:39.494053 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.494061 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-02 01:01:39.494074 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-02 01:01:39.494083 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.494091 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-02 01:01:39.494098 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-02 01:01:39.494106 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.494114 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-02 01:01:39.494122 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-02 01:01:39.494130 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 01:01:39.494138 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-02 01:01:39.494146 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-02 01:01:39.494154 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 01:01:39.494162 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-02 01:01:39.494170 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-02 01:01:39.494178 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-02-02 01:01:39.494186 | orchestrator |
2026-02-02 01:01:39.494193 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 01:01:39.494202 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-02 01:01:39.494211 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-02 01:01:39.494219 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-02 01:01:39.494227 | orchestrator |
2026-02-02 01:01:39.494234 | orchestrator |
2026-02-02 01:01:39.494242 | orchestrator |
2026-02-02 01:01:39.494250 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 01:01:39.494258 | orchestrator | Monday 02 February 2026 01:01:38 +0000 (0:00:18.116) 0:02:10.613 *******
2026-02-02 01:01:39.494266 | orchestrator | ===============================================================================
2026-02-02 01:01:39.494274 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.41s
2026-02-02 01:01:39.494281 | orchestrator | generate keys ---------------------------------------------------------- 24.22s
2026-02-02 01:01:39.494297 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.12s
2026-02-02 01:01:39.494305 | orchestrator | get keys from monitors ------------------------------------------------- 11.99s
2026-02-02 01:01:39.494312 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.12s
2026-02-02 01:01:39.494320 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.90s
2026-02-02 01:01:39.494333 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.52s
2026-02-02 01:01:39.494341 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.08s
2026-02-02 01:01:39.494348 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.89s
2026-02-02 01:01:39.494356 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.84s
2026-02-02 01:01:39.494364 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.77s
2026-02-02 01:01:39.494372 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.76s
2026-02-02 01:01:39.494380 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.74s
2026-02-02 01:01:39.494387 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.74s
2026-02-02 01:01:39.494395 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s
2026-02-02 01:01:39.494403 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.68s
2026-02-02 01:01:39.494411 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.67s
2026-02-02 01:01:39.494418 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.64s
2026-02-02 01:01:39.494426 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.61s
2026-02-02
01:01:39.494434 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.61s
2026-02-02 01:01:39.494442 | orchestrator | 2026-02-02 01:01:39 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:01:39.494450 | orchestrator | 2026-02-02 01:01:39 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:01:42.557990 | orchestrator | 2026-02-02 01:01:42 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:01:42.558291 | orchestrator | 2026-02-02 01:01:42 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:01:42.560735 | orchestrator | 2026-02-02 01:01:42 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state STARTED
2026-02-02 01:01:42.560785 | orchestrator | 2026-02-02 01:01:42 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:01:45.616793 | orchestrator | 2026-02-02 01:01:45 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:01:45.618937 | orchestrator | 2026-02-02 01:01:45 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:01:45.621135 | orchestrator | 2026-02-02 01:01:45 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state STARTED
2026-02-02 01:01:45.621183 | orchestrator | 2026-02-02 01:01:45 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:01:48.666409 | orchestrator | 2026-02-02 01:01:48 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:01:48.667954 | orchestrator | 2026-02-02 01:01:48 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:01:48.669553 | orchestrator | 2026-02-02 01:01:48 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state STARTED
2026-02-02 01:01:48.669579 | orchestrator | 2026-02-02 01:01:48 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:01:51.711177 | orchestrator | 2026-02-02 01:01:51 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:01:51.712224 | orchestrator | 2026-02-02 01:01:51 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:01:51.714065 | orchestrator | 2026-02-02 01:01:51 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state STARTED
2026-02-02 01:01:51.714115 | orchestrator | 2026-02-02 01:01:51 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:01:54.755471 | orchestrator | 2026-02-02 01:01:54 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:01:54.756426 | orchestrator | 2026-02-02 01:01:54 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:01:54.759060 | orchestrator | 2026-02-02 01:01:54 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state STARTED
2026-02-02 01:01:54.759161 | orchestrator | 2026-02-02 01:01:54 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:01:57.803095 | orchestrator | 2026-02-02 01:01:57 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:01:57.804621 | orchestrator | 2026-02-02 01:01:57 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:01:57.805975 | orchestrator | 2026-02-02 01:01:57 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state STARTED
2026-02-02 01:01:57.806043 | orchestrator | 2026-02-02 01:01:57 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:02:00.856165 | orchestrator | 2026-02-02 01:02:00 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:02:00.856982 | orchestrator | 2026-02-02 01:02:00 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:02:00.858374 | orchestrator | 2026-02-02 01:02:00 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state STARTED
2026-02-02 01:02:00.858418 | orchestrator | 2026-02-02 01:02:00 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:02:03.907914 | orchestrator | 2026-02-02 01:02:03 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:02:03.909333 | orchestrator | 2026-02-02 01:02:03 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:02:03.911298 | orchestrator | 2026-02-02 01:02:03 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state STARTED
2026-02-02 01:02:03.911374 | orchestrator | 2026-02-02 01:02:03 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:02:06.964722 | orchestrator | 2026-02-02 01:02:06 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:02:06.967460 | orchestrator | 2026-02-02 01:02:06 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:02:06.969071 | orchestrator | 2026-02-02 01:02:06 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state STARTED
2026-02-02 01:02:06.969108 | orchestrator | 2026-02-02 01:02:06 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:02:10.029178 | orchestrator | 2026-02-02 01:02:10 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:02:10.031749 | orchestrator | 2026-02-02 01:02:10 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:02:10.034366 | orchestrator | 2026-02-02 01:02:10 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state STARTED
2026-02-02 01:02:10.034631 | orchestrator | 2026-02-02 01:02:10 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:02:13.080551 | orchestrator | 2026-02-02 01:02:13 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:02:13.081621 | orchestrator | 2026-02-02 01:02:13 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:02:13.083252 | orchestrator | 2026-02-02 01:02:13 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state STARTED
2026-02-02 01:02:13.083331 | orchestrator | 2026-02-02 01:02:13 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:02:16.129531 | orchestrator | 2026-02-02 01:02:16 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:02:16.131621 | orchestrator | 2026-02-02 01:02:16 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:02:16.133642 | orchestrator | 2026-02-02 01:02:16 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state STARTED
2026-02-02 01:02:16.133690 | orchestrator | 2026-02-02 01:02:16 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:02:19.180486 | orchestrator | 2026-02-02 01:02:19 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:02:19.181983 | orchestrator | 2026-02-02 01:02:19 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:02:19.183849 | orchestrator | 2026-02-02 01:02:19 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state STARTED
2026-02-02 01:02:19.183917 | orchestrator | 2026-02-02 01:02:19 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:02:22.234852 | orchestrator | 2026-02-02 01:02:22 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:02:22.235388 | orchestrator | 2026-02-02 01:02:22 | INFO  | Task de54f3e1-5f24-4102-aadd-6437c0482997 is in state STARTED
2026-02-02 01:02:22.239106 | orchestrator | 2026-02-02 01:02:22 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:02:22.240934 | orchestrator | 2026-02-02 01:02:22 | INFO  | Task 1c2068f3-0f55-4561-bbd7-a511be3bd876 is in state SUCCESS
2026-02-02 01:02:22.241001 | orchestrator | 2026-02-02 01:02:22 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:02:25.285371 | orchestrator | 2026-02-02 01:02:25 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:02:25.286313 | orchestrator | 2026-02-02 01:02:25 | INFO  | Task de54f3e1-5f24-4102-aadd-6437c0482997 is in state STARTED
2026-02-02 01:02:25.288082 | orchestrator | 2026-02-02 01:02:25 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:02:25.288134 | orchestrator | 2026-02-02 01:02:25 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:02:28.332187 | orchestrator | 2026-02-02 01:02:28 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:02:28.333797 | orchestrator | 2026-02-02 01:02:28 | INFO  | Task de54f3e1-5f24-4102-aadd-6437c0482997 is in state STARTED
2026-02-02 01:02:28.338630 | orchestrator | 2026-02-02 01:02:28 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:02:28.338767 | orchestrator | 2026-02-02 01:02:28 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:02:31.389232 | orchestrator | 2026-02-02 01:02:31 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state STARTED
2026-02-02 01:02:31.390526 | orchestrator | 2026-02-02 01:02:31 | INFO  | Task de54f3e1-5f24-4102-aadd-6437c0482997 is in state STARTED
2026-02-02 01:02:31.390570 | orchestrator | 2026-02-02 01:02:31 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED
2026-02-02 01:02:31.390578 | orchestrator | 2026-02-02 01:02:31 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:02:34.437747 | orchestrator | 2026-02-02 01:02:34 | INFO  | Task e580bd53-c876-4509-be6f-01a148add2d5 is in state SUCCESS
2026-02-02 01:02:34.437983 | orchestrator |
2026-02-02 01:02:34.437998 | orchestrator |
2026-02-02 01:02:34.438003 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-02-02 01:02:34.438007 | orchestrator |
2026-02-02 01:02:34.438012 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-02-02 01:02:34.438049 | orchestrator | Monday 02 February 2026 01:01:43 +0000 (0:00:00.195) 0:00:00.195 *******
2026-02-02 01:02:34.438054 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-02 01:02:34.438059 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-02 01:02:34.438063 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-02 01:02:34.438067 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-02 01:02:34.438071 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-02 01:02:34.438075 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-02 01:02:34.438079 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-02 01:02:34.438083 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-02 01:02:34.438087 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-02 01:02:34.438091 | orchestrator |
2026-02-02 01:02:34.438095 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-02-02 01:02:34.438098 | orchestrator | Monday 02 February 2026 01:01:48 +0000 (0:00:04.816) 0:00:05.011 *******
2026-02-02 01:02:34.438102 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-02 01:02:34.438106 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-02 01:02:34.438110 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-02 01:02:34.438114 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-02 01:02:34.438117 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-02 01:02:34.438121 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-02 01:02:34.438125 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-02 01:02:34.438129 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-02 01:02:34.438133 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-02 01:02:34.438137 | orchestrator |
2026-02-02 01:02:34.438141 | orchestrator | TASK [Create share directory] **************************************************
2026-02-02 01:02:34.438145 | orchestrator | Monday 02 February 2026 01:01:52 +0000 (0:00:04.422) 0:00:09.434 *******
2026-02-02 01:02:34.438149 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-02 01:02:34.438153 | orchestrator |
2026-02-02 01:02:34.438157 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-02-02 01:02:34.438161 | orchestrator | Monday 02 February 2026 01:01:54 +0000 (0:00:01.145) 0:00:10.579 *******
2026-02-02 01:02:34.438165 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-02-02 01:02:34.438169 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-02 01:02:34.438173 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-02 01:02:34.438188 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-02-02 01:02:34.438198 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-02 01:02:34.438202 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-02-02 01:02:34.438206 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-02-02 01:02:34.438210 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-02-02 01:02:34.438214 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-02-02 01:02:34.438217 | orchestrator |
2026-02-02 01:02:34.438221 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-02-02 01:02:34.438225 | orchestrator | Monday 02 February 2026 01:02:09 +0000 (0:00:15.200) 0:00:25.780 *******
2026-02-02 01:02:34.438229 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-02-02 01:02:34.438233 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-02-02 01:02:34.438237 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-02 01:02:34.438241 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-02 01:02:34.438250 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-02 01:02:34.438254 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-02 01:02:34.438258 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-02-02 01:02:34.438262 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-02-02 01:02:34.438266 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-02-02 01:02:34.438269 | orchestrator |
2026-02-02 01:02:34.438273 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-02-02 01:02:34.438277 | orchestrator | Monday 02 February 2026 01:02:12 +0000 (0:00:03.138) 0:00:28.919 *******
2026-02-02 01:02:34.438282 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-02-02 01:02:34.438286 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-02 01:02:34.438290 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-02 01:02:34.438293 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-02-02 01:02:34.438297 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-02 01:02:34.438301 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-02-02 01:02:34.438305 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-02-02 01:02:34.438309 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-02-02 01:02:34.438312 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-02-02 01:02:34.438316 | orchestrator |
2026-02-02 01:02:34.438320 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 01:02:34.438324 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 01:02:34.438328 | orchestrator |
2026-02-02 01:02:34.438332 | orchestrator |
2026-02-02 01:02:34.438336 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 01:02:34.438340 | orchestrator | Monday 02 February 2026 01:02:19 +0000 (0:00:07.235) 0:00:36.154 *******
2026-02-02 01:02:34.438343 | orchestrator | ===============================================================================
2026-02-02 01:02:34.438347 | orchestrator | Write ceph keys to the share directory --------------------------------- 15.20s
2026-02-02 01:02:34.438351 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.24s
2026-02-02 01:02:34.438358 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.82s
2026-02-02 01:02:34.438362 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.42s
2026-02-02 01:02:34.438366 | orchestrator | Check if target directories exist --------------------------------------- 3.14s
2026-02-02 01:02:34.438369 | orchestrator | Create share directory -------------------------------------------------- 1.15s
2026-02-02 01:02:34.438373 | orchestrator |
2026-02-02 01:02:34.440067 | orchestrator |
2026-02-02 01:02:34.440097 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 01:02:34.440102 | orchestrator |
2026-02-02 01:02:34.440106 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 01:02:34.440110 | orchestrator | Monday 02 February 2026 01:00:58 +0000 (0:00:00.277) 0:00:00.277 *******
2026-02-02 01:02:34.440114 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:02:34.440119 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:02:34.440123 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:02:34.440127 | orchestrator |
2026-02-02 01:02:34.440131 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 01:02:34.440135 | orchestrator | Monday 02 February 2026 01:00:58 +0000 (0:00:00.322) 0:00:00.600 *******
2026-02-02 01:02:34.440138 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-02-02 01:02:34.440143 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-02-02 01:02:34.440147 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-02-02 01:02:34.440151 | orchestrator |
2026-02-02 01:02:34.440162 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-02-02 01:02:34.440168 | orchestrator |
2026-02-02 01:02:34.440175 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-02 01:02:34.440181 | orchestrator | Monday 02 February 2026 01:00:58 +0000 (0:00:00.485) 0:00:01.086 *******
2026-02-02 01:02:34.440188 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:02:34.440195 | orchestrator |
2026-02-02 01:02:34.440202 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-02-02 01:02:34.440208 | orchestrator | Monday 02 February 2026 01:00:59 +0000 (0:00:00.548) 0:00:01.634 *******
2026-02-02 01:02:34.440220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-02 01:02:34.440254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-02 01:02:34.440260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-02 01:02:34.440269 | orchestrator |
2026-02-02 01:02:34.440273 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-02
01:02:34.440277 | orchestrator | Monday 02 February 2026 01:01:00 +0000 (0:00:01.317) 0:00:02.952 *******
2026-02-02 01:02:34.440281 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:02:34.440285 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:02:34.440289 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:02:34.440292 | orchestrator |
2026-02-02 01:02:34.440296 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-02 01:02:34.440303 | orchestrator | Monday 02 February 2026 01:01:01 +0000 (0:00:00.540) 0:00:03.493 *******
2026-02-02 01:02:34.440307 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-02 01:02:34.440311 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-02 01:02:34.440315 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-02-02 01:02:34.440318 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-02-02 01:02:34.440322 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-02-02 01:02:34.440326 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-02-02 01:02:34.440330 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-02-02 01:02:34.440336 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-02-02 01:02:34.440340 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-02 01:02:34.440344 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-02 01:02:34.440348 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-02-02 01:02:34.440352 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-02-02 01:02:34.440356 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-02-02 01:02:34.440360 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-02-02 01:02:34.440363 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-02-02 01:02:34.440367 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-02-02 01:02:34.440371 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-02 01:02:34.440375 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-02 01:02:34.440379 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-02-02 01:02:34.440383 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-02-02 01:02:34.440386 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-02-02 01:02:34.440390 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-02-02 01:02:34.440397 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-02-02 01:02:34.440401 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-02-02 01:02:34.440405 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-02-02 01:02:34.440411 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-02-02 01:02:34.440415 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-02-02 01:02:34.440419 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-02 01:02:34.440595 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-02 01:02:34.440603 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-02 01:02:34.440609 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-02 01:02:34.440615 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-02 01:02:34.440621 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-02 01:02:34.440628 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-02 01:02:34.440634 | orchestrator |
2026-02-02 01:02:34.440640 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 01:02:34.440646 | orchestrator | Monday 02 February 2026 01:01:02 +0000 (0:00:00.846) 0:00:04.340 *******
2026-02-02 01:02:34.440652 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:02:34.440658 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:02:34.440663 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:02:34.440669 | orchestrator |
2026-02-02 01:02:34.440680 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 01:02:34.440686 | orchestrator | Monday 02 February 2026 01:01:02 +0000 (0:00:00.368) 0:00:04.709 *******
2026-02-02 01:02:34.440692 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:02:34.440698 | orchestrator |
2026-02-02 01:02:34.440704 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 01:02:34.440709 | orchestrator | Monday 02 February 2026 01:01:02 +0000 (0:00:00.126) 0:00:04.835 *******
2026-02-02 01:02:34.440715 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:02:34.440721 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:02:34.440727 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:02:34.440733 | orchestrator |
2026-02-02 01:02:34.440738 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 01:02:34.440744 | orchestrator | Monday 02 February 2026 01:01:03 +0000 (0:00:00.534) 0:00:05.370 *******
2026-02-02 01:02:34.440750 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:02:34.440755 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:02:34.440761 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:02:34.440766 | orchestrator |
2026-02-02 01:02:34.440777 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 01:02:34.440783 | orchestrator | Monday 02 February 2026 01:01:03 +0000 (0:00:00.319) 0:00:05.690 *******
2026-02-02 01:02:34.440795 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:02:34.440801 | orchestrator |
2026-02-02 01:02:34.440807 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 01:02:34.440813 | orchestrator | Monday 02 February 2026 01:01:03 +0000 (0:00:00.140) 0:00:05.830 *******
2026-02-02 01:02:34.440818 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:02:34.440824 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:02:34.440830 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:02:34.440835 | orchestrator |
2026-02-02 01:02:34.440841 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 01:02:34.440863 | orchestrator | Monday 02 February 2026 01:01:03 +0000 (0:00:00.318) 0:00:06.149 *******
2026-02-02 01:02:34.440869 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:02:34.440875 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:02:34.440881 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:02:34.440886 | orchestrator |
2026-02-02 01:02:34.440910 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 01:02:34.440918 | orchestrator | Monday 02 February 2026 01:01:04 +0000 (0:00:00.331) 0:00:06.481 *******
2026-02-02 01:02:34.440924 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:02:34.440929 | orchestrator |
2026-02-02 01:02:34.440936 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 01:02:34.440943 | orchestrator | Monday 02 February 2026 01:01:04 +0000 (0:00:00.370) 0:00:06.852 *******
2026-02-02 01:02:34.440947 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:02:34.440951 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:02:34.440955 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:02:34.440959 | orchestrator |
2026-02-02 01:02:34.440963 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 01:02:34.440967 | orchestrator | Monday 02 February 2026 01:01:04 +0000 (0:00:00.349) 0:00:07.201 *******
2026-02-02 01:02:34.440971 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:02:34.440975 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:02:34.440978 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:02:34.440983 | orchestrator |
2026-02-02 01:02:34.440989 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 01:02:34.440995 | orchestrator | Monday 02 February 2026 01:01:05 +0000 (0:00:00.342) 0:00:07.543 *******
2026-02-02 01:02:34.441001 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:02:34.441070 | orchestrator |
2026-02-02 01:02:34.441077 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 01:02:34.441082 | orchestrator | Monday 02 February 2026 01:01:05 +0000 (0:00:00.133) 0:00:07.676 *******
2026-02-02 01:02:34.441088 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:02:34.441095 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:02:34.441101 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:02:34.441106 | orchestrator |
2026-02-02 01:02:34.441113 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 01:02:34.441119 | orchestrator | Monday 02 February 2026 01:01:05 +0000 (0:00:00.285) 0:00:07.962 *******
2026-02-02 01:02:34.441126 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:02:34.441132 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:02:34.441138 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:02:34.441145 | orchestrator |
2026-02-02 01:02:34.441151 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 01:02:34.441157 | orchestrator | Monday 02 February 2026 01:01:06 +0000 (0:00:00.592) 0:00:08.554 *******
2026-02-02 01:02:34.441164 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:02:34.441170 | orchestrator |
2026-02-02 01:02:34.441177 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 01:02:34.441183 | orchestrator | Monday 02 February 2026 01:01:06 +0000 (0:00:00.152) 0:00:08.706 *******
2026-02-02 01:02:34.441189 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:02:34.441195 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:02:34.441210 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:02:34.441214 | orchestrator |
2026-02-02 01:02:34.441218 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 01:02:34.441222 | orchestrator | Monday 02 February 2026 01:01:06 +0000 (0:00:00.299) 0:00:09.006 *******
2026-02-02 01:02:34.441226 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:02:34.441230 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:02:34.441233 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:02:34.441237 | orchestrator |
2026-02-02 01:02:34.441241 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 01:02:34.441245 | orchestrator | Monday 02 February 2026 01:01:07 +0000 (0:00:00.342) 0:00:09.349 *******
2026-02-02 01:02:34.441249 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:02:34.441253 | orchestrator |
2026-02-02 01:02:34.441257 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 01:02:34.441260 | orchestrator | Monday 02 February 2026 01:01:07 +0000 (0:00:00.134) 0:00:09.483 *******
2026-02-02 01:02:34.441264 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:02:34.441268 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:02:34.441279 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:02:34.441283 | orchestrator |
2026-02-02 01:02:34.441287 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 01:02:34.441291 | orchestrator | Monday 02 February 2026 01:01:07 +0000 (0:00:00.312) 0:00:09.796 *******
2026-02-02 01:02:34.441295 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:02:34.441299 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:02:34.441302 | orchestrator | ok: [testbed-node-2] 2026-02-02
01:02:34.441306 | orchestrator | 2026-02-02 01:02:34.441310 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-02 01:02:34.441314 | orchestrator | Monday 02 February 2026 01:01:08 +0000 (0:00:00.562) 0:00:10.359 ******* 2026-02-02 01:02:34.441318 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.441322 | orchestrator | 2026-02-02 01:02:34.441325 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-02 01:02:34.441329 | orchestrator | Monday 02 February 2026 01:01:08 +0000 (0:00:00.118) 0:00:10.478 ******* 2026-02-02 01:02:34.441333 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.441341 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:02:34.441345 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:02:34.441349 | orchestrator | 2026-02-02 01:02:34.441353 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-02 01:02:34.441357 | orchestrator | Monday 02 February 2026 01:01:08 +0000 (0:00:00.301) 0:00:10.779 ******* 2026-02-02 01:02:34.441360 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:02:34.441364 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:02:34.441368 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:02:34.441372 | orchestrator | 2026-02-02 01:02:34.441376 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-02 01:02:34.441380 | orchestrator | Monday 02 February 2026 01:01:08 +0000 (0:00:00.412) 0:00:11.192 ******* 2026-02-02 01:02:34.441383 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.441387 | orchestrator | 2026-02-02 01:02:34.441391 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-02 01:02:34.441395 | orchestrator | Monday 02 February 2026 01:01:09 +0000 (0:00:00.165) 0:00:11.357 ******* 2026-02-02 01:02:34.441399 | 
orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.441403 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:02:34.441407 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:02:34.441410 | orchestrator | 2026-02-02 01:02:34.441414 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-02 01:02:34.441418 | orchestrator | Monday 02 February 2026 01:01:09 +0000 (0:00:00.533) 0:00:11.891 ******* 2026-02-02 01:02:34.441422 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:02:34.441426 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:02:34.441430 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:02:34.441440 | orchestrator | 2026-02-02 01:02:34.441444 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-02 01:02:34.441448 | orchestrator | Monday 02 February 2026 01:01:09 +0000 (0:00:00.308) 0:00:12.200 ******* 2026-02-02 01:02:34.441452 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.441456 | orchestrator | 2026-02-02 01:02:34.441460 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-02 01:02:34.441464 | orchestrator | Monday 02 February 2026 01:01:10 +0000 (0:00:00.141) 0:00:12.341 ******* 2026-02-02 01:02:34.441467 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.441471 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:02:34.441475 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:02:34.441479 | orchestrator | 2026-02-02 01:02:34.441483 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-02 01:02:34.441486 | orchestrator | Monday 02 February 2026 01:01:10 +0000 (0:00:00.335) 0:00:12.678 ******* 2026-02-02 01:02:34.441490 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:02:34.441494 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:02:34.441498 | orchestrator | ok: 
[testbed-node-2] 2026-02-02 01:02:34.441502 | orchestrator | 2026-02-02 01:02:34.441505 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-02 01:02:34.441509 | orchestrator | Monday 02 February 2026 01:01:10 +0000 (0:00:00.344) 0:00:13.022 ******* 2026-02-02 01:02:34.441513 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.441517 | orchestrator | 2026-02-02 01:02:34.441521 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-02 01:02:34.441525 | orchestrator | Monday 02 February 2026 01:01:10 +0000 (0:00:00.132) 0:00:13.155 ******* 2026-02-02 01:02:34.441528 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.441532 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:02:34.441536 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:02:34.441540 | orchestrator | 2026-02-02 01:02:34.441544 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-02-02 01:02:34.441548 | orchestrator | Monday 02 February 2026 01:01:11 +0000 (0:00:00.542) 0:00:13.697 ******* 2026-02-02 01:02:34.441552 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:02:34.441555 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:02:34.441559 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:02:34.441563 | orchestrator | 2026-02-02 01:02:34.441567 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-02-02 01:02:34.441571 | orchestrator | Monday 02 February 2026 01:01:13 +0000 (0:00:01.677) 0:00:15.375 ******* 2026-02-02 01:02:34.441575 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-02 01:02:34.441579 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-02 01:02:34.441582 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-02 01:02:34.441586 | orchestrator | 2026-02-02 01:02:34.441590 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-02-02 01:02:34.441594 | orchestrator | Monday 02 February 2026 01:01:15 +0000 (0:00:02.011) 0:00:17.386 ******* 2026-02-02 01:02:34.441598 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-02 01:02:34.441602 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-02 01:02:34.441609 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-02 01:02:34.441613 | orchestrator | 2026-02-02 01:02:34.441617 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-02 01:02:34.441621 | orchestrator | Monday 02 February 2026 01:01:17 +0000 (0:00:02.761) 0:00:20.147 ******* 2026-02-02 01:02:34.441624 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-02 01:02:34.441631 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-02 01:02:34.441635 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-02 01:02:34.441639 | orchestrator | 2026-02-02 01:02:34.441643 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-02 01:02:34.441649 | orchestrator | Monday 02 February 2026 01:01:20 +0000 (0:00:02.369) 0:00:22.517 ******* 2026-02-02 01:02:34.441653 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.441657 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:02:34.441661 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:02:34.441664 | 
orchestrator | 2026-02-02 01:02:34.441668 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-02-02 01:02:34.441672 | orchestrator | Monday 02 February 2026 01:01:20 +0000 (0:00:00.389) 0:00:22.907 ******* 2026-02-02 01:02:34.441676 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.441680 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:02:34.441683 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:02:34.441687 | orchestrator | 2026-02-02 01:02:34.441691 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-02 01:02:34.441695 | orchestrator | Monday 02 February 2026 01:01:20 +0000 (0:00:00.297) 0:00:23.204 ******* 2026-02-02 01:02:34.441699 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:02:34.441704 | orchestrator | 2026-02-02 01:02:34.441708 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-02 01:02:34.441713 | orchestrator | Monday 02 February 2026 01:01:21 +0000 (0:00:00.817) 0:00:24.021 ******* 2026-02-02 01:02:34.441720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 01:02:34.441734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 01:02:34.441748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 
01:02:34.441756 | orchestrator | 2026-02-02 01:02:34.441761 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-02 01:02:34.441765 | orchestrator | Monday 02 February 2026 01:01:23 +0000 (0:00:01.829) 0:00:25.851 ******* 2026-02-02 01:02:34.441773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 01:02:34.441779 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.441788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 01:02:34.441795 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:02:34.441803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 01:02:34.441808 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:02:34.441813 | orchestrator | 2026-02-02 01:02:34.441817 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-02 01:02:34.441822 | orchestrator | Monday 02 February 2026 01:01:24 +0000 (0:00:00.674) 0:00:26.525 ******* 2026-02-02 01:02:34.441833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 01:02:34.441842 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.441865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 01:02:34.441874 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:02:34.441886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 01:02:34.441891 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:02:34.441896 | orchestrator | 2026-02-02 01:02:34.441900 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-02-02 01:02:34.441905 | orchestrator | Monday 02 February 2026 01:01:25 +0000 (0:00:00.841) 0:00:27.367 ******* 2026-02-02 01:02:34.441910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 01:02:34.441925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 01:02:34.441935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 01:02:34.441947 | orchestrator | 2026-02-02 01:02:34.441951 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-02-02 01:02:34.441956 | orchestrator | Monday 02 February 2026 01:01:26 +0000 (0:00:01.765) 0:00:29.133 ******* 2026-02-02 01:02:34.441961 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 01:02:34.441965 | 
orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:02:34.441970 | orchestrator | } 2026-02-02 01:02:34.441975 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 01:02:34.441979 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:02:34.441984 | orchestrator | } 2026-02-02 01:02:34.441989 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 01:02:34.441993 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:02:34.441997 | orchestrator | } 2026-02-02 01:02:34.442002 | orchestrator | 2026-02-02 01:02:34.442009 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 01:02:34.442041 | orchestrator | Monday 02 February 2026 01:01:27 +0000 (0:00:00.423) 0:00:29.556 ******* 2026-02-02 01:02:34.442046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 01:02:34.442055 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.442068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 01:02:34.442074 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:02:34.442079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 01:02:34.442087 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:02:34.442092 | orchestrator | 2026-02-02 01:02:34.442096 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-02 01:02:34.442100 | orchestrator | Monday 02 February 2026 01:01:28 +0000 (0:00:00.954) 
0:00:30.511 ******* 2026-02-02 01:02:34.442104 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:02:34.442108 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:02:34.442111 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:02:34.442115 | orchestrator | 2026-02-02 01:02:34.442119 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-02 01:02:34.442123 | orchestrator | Monday 02 February 2026 01:01:28 +0000 (0:00:00.542) 0:00:31.054 ******* 2026-02-02 01:02:34.442127 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:02:34.442131 | orchestrator | 2026-02-02 01:02:34.442137 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-02 01:02:34.442141 | orchestrator | Monday 02 February 2026 01:01:29 +0000 (0:00:00.620) 0:00:31.675 ******* 2026-02-02 01:02:34.442145 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:02:34.442149 | orchestrator | 2026-02-02 01:02:34.442152 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-02 01:02:34.442156 | orchestrator | Monday 02 February 2026 01:01:31 +0000 (0:00:02.312) 0:00:33.987 ******* 2026-02-02 01:02:34.442160 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:02:34.442164 | orchestrator | 2026-02-02 01:02:34.442168 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-02 01:02:34.442172 | orchestrator | Monday 02 February 2026 01:01:34 +0000 (0:00:02.247) 0:00:36.235 ******* 2026-02-02 01:02:34.442176 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:02:34.442180 | orchestrator | 2026-02-02 01:02:34.442183 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-02 01:02:34.442187 | orchestrator | Monday 02 February 2026 01:01:51 +0000 
(0:00:17.171) 0:00:53.407 ******* 2026-02-02 01:02:34.442192 | orchestrator | 2026-02-02 01:02:34.442202 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-02 01:02:34.442208 | orchestrator | Monday 02 February 2026 01:01:51 +0000 (0:00:00.068) 0:00:53.475 ******* 2026-02-02 01:02:34.442214 | orchestrator | 2026-02-02 01:02:34.442220 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-02 01:02:34.442226 | orchestrator | Monday 02 February 2026 01:01:51 +0000 (0:00:00.276) 0:00:53.752 ******* 2026-02-02 01:02:34.442232 | orchestrator | 2026-02-02 01:02:34.442239 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-02 01:02:34.442245 | orchestrator | Monday 02 February 2026 01:01:51 +0000 (0:00:00.070) 0:00:53.822 ******* 2026-02-02 01:02:34.442252 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:02:34.442258 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:02:34.442265 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:02:34.442271 | orchestrator | 2026-02-02 01:02:34.442277 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:02:34.442287 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-02-02 01:02:34.442291 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-02-02 01:02:34.442295 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-02-02 01:02:34.442298 | orchestrator | 2026-02-02 01:02:34.442302 | orchestrator | 2026-02-02 01:02:34.442306 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:02:34.442310 | orchestrator | Monday 02 February 2026 01:02:31 +0000 (0:00:39.937) 0:01:33.760 
******* 2026-02-02 01:02:34.442314 | orchestrator | =============================================================================== 2026-02-02 01:02:34.442318 | orchestrator | horizon : Restart horizon container ------------------------------------ 39.94s 2026-02-02 01:02:34.442321 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.17s 2026-02-02 01:02:34.442325 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.76s 2026-02-02 01:02:34.442329 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.37s 2026-02-02 01:02:34.442333 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.31s 2026-02-02 01:02:34.442337 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.25s 2026-02-02 01:02:34.442340 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.01s 2026-02-02 01:02:34.442344 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.83s 2026-02-02 01:02:34.442348 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.77s 2026-02-02 01:02:34.442351 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.68s 2026-02-02 01:02:34.442355 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.32s 2026-02-02 01:02:34.442359 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.95s 2026-02-02 01:02:34.442363 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.85s 2026-02-02 01:02:34.442367 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.84s 2026-02-02 01:02:34.442370 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.82s 
2026-02-02 01:02:34.442374 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s 2026-02-02 01:02:34.442378 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2026-02-02 01:02:34.442382 | orchestrator | horizon : Update policy file name --------------------------------------- 0.59s 2026-02-02 01:02:34.442385 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2026-02-02 01:02:34.442389 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s 2026-02-02 01:02:34.442393 | orchestrator | 2026-02-02 01:02:34 | INFO  | Task de54f3e1-5f24-4102-aadd-6437c0482997 is in state STARTED 2026-02-02 01:02:34.443011 | orchestrator | 2026-02-02 01:02:34 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state STARTED 2026-02-02 01:02:34.443025 | orchestrator | 2026-02-02 01:02:34 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:03:20.143849 | orchestrator | 2026-02-02 01:03:20 | INFO  | Task de54f3e1-5f24-4102-aadd-6437c0482997 is in state SUCCESS 2026-02-02 01:03:20.147298 | orchestrator | 2026-02-02 01:03:20 | INFO  | Task 50cfbe91-e1a0-4fdb-a0eb-76a58db833b0 is in state SUCCESS 2026-02-02 01:03:20.148856 | orchestrator | 2026-02-02 01:03:20.148889 | orchestrator | 2026-02-02 01:03:20.148894 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-02 01:03:20.148899 | orchestrator | 2026-02-02 01:03:20.148905 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-02 01:03:20.148912 | orchestrator | Monday 02 February 2026 01:02:24 +0000 (0:00:00.239) 0:00:00.239 ******* 2026-02-02 01:03:20.148919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-02 01:03:20.148926 | orchestrator | 2026-02-02 01:03:20.148933 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-02 01:03:20.148939 | orchestrator | Monday 02 February 2026 01:02:24 +0000 (0:00:00.248) 0:00:00.488 ******* 2026-02-02 01:03:20.148962 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-02 01:03:20.148968 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-02 01:03:20.148972 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-02 01:03:20.148977 | orchestrator | 2026-02-02
01:03:20.148981 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-02 01:03:20.148985 | orchestrator | Monday 02 February 2026 01:02:26 +0000 (0:00:01.315) 0:00:01.803 *******
2026-02-02 01:03:20.148989 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-02 01:03:20.148994 | orchestrator |
2026-02-02 01:03:20.148998 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-02 01:03:20.149043 | orchestrator | Monday 02 February 2026 01:02:27 +0000 (0:00:01.586) 0:00:03.389 *******
2026-02-02 01:03:20.149048 | orchestrator | changed: [testbed-manager]
2026-02-02 01:03:20.149053 | orchestrator |
2026-02-02 01:03:20.149057 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-02 01:03:20.149061 | orchestrator | Monday 02 February 2026 01:02:28 +0000 (0:00:00.883) 0:00:04.273 *******
2026-02-02 01:03:20.149065 | orchestrator | changed: [testbed-manager]
2026-02-02 01:03:20.149068 | orchestrator |
2026-02-02 01:03:20.149072 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-02 01:03:20.149078 | orchestrator | Monday 02 February 2026 01:02:29 +0000 (0:00:01.005) 0:00:05.279 *******
2026-02-02 01:03:20.149106 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-02-02 01:03:20.149113 | orchestrator | ok: [testbed-manager]
2026-02-02 01:03:20.149119 | orchestrator |
2026-02-02 01:03:20.149205 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-02 01:03:20.149218 | orchestrator | Monday 02 February 2026 01:03:09 +0000 (0:00:40.189) 0:00:45.468 *******
2026-02-02 01:03:20.149225 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-02 01:03:20.149232 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-02 01:03:20.149238 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-02 01:03:20.149245 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-02 01:03:20.149252 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-02 01:03:20.149257 | orchestrator |
2026-02-02 01:03:20.149264 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-02 01:03:20.149270 | orchestrator | Monday 02 February 2026 01:03:14 +0000 (0:00:04.206) 0:00:49.675 *******
2026-02-02 01:03:20.149276 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-02 01:03:20.149280 | orchestrator |
2026-02-02 01:03:20.149283 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-02 01:03:20.149287 | orchestrator | Monday 02 February 2026 01:03:14 +0000 (0:00:00.465) 0:00:50.141 *******
2026-02-02 01:03:20.149291 | orchestrator | skipping: [testbed-manager]
2026-02-02 01:03:20.149296 | orchestrator |
2026-02-02 01:03:20.149300 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-02 01:03:20.149303 | orchestrator | Monday 02 February 2026 01:03:14 +0000 (0:00:00.149) 0:00:50.290 *******
2026-02-02 01:03:20.149307 | orchestrator | skipping: [testbed-manager]
2026-02-02 01:03:20.149311 | orchestrator |
2026-02-02 01:03:20.149315 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-02-02 01:03:20.149319 | orchestrator | Monday 02 February 2026 01:03:15 +0000 (0:00:00.544) 0:00:50.834 *******
2026-02-02 01:03:20.149322 | orchestrator | changed: [testbed-manager]
2026-02-02 01:03:20.149326 | orchestrator |
2026-02-02 01:03:20.149330 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-02 01:03:20.149334 | orchestrator | Monday 02 February 2026 01:03:16 +0000 (0:00:01.434) 0:00:52.268 *******
2026-02-02 01:03:20.149439 | orchestrator | changed: [testbed-manager]
2026-02-02 01:03:20.149445 | orchestrator |
2026-02-02 01:03:20.149449 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-02-02 01:03:20.149453 | orchestrator | Monday 02 February 2026 01:03:17 +0000 (0:00:00.724) 0:00:52.993 *******
2026-02-02 01:03:20.149457 | orchestrator | changed: [testbed-manager]
2026-02-02 01:03:20.149461 | orchestrator |
2026-02-02 01:03:20.149465 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-02-02 01:03:20.149469 | orchestrator | Monday 02 February 2026 01:03:17 +0000 (0:00:00.636) 0:00:53.630 *******
2026-02-02 01:03:20.149473 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-02 01:03:20.149477 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-02 01:03:20.149481 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-02 01:03:20.149485 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-02 01:03:20.149488 | orchestrator |
2026-02-02 01:03:20.149492 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 01:03:20.149503 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 01:03:20.149508 | orchestrator |
2026-02-02 01:03:20.149512 | orchestrator |
2026-02-02 01:03:20.149524 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 01:03:20.149531 | orchestrator | Monday 02 February 2026 01:03:19 +0000 (0:00:01.541) 0:00:55.171 *******
2026-02-02 01:03:20.149537 | orchestrator | ===============================================================================
2026-02-02 01:03:20.149543 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.19s
2026-02-02 01:03:20.149558 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.21s
2026-02-02 01:03:20.149565 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.59s
2026-02-02 01:03:20.149571 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.54s
2026-02-02 01:03:20.149577 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.43s
2026-02-02 01:03:20.149583 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.32s
2026-02-02 01:03:20.149587 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.01s
2026-02-02 01:03:20.149591 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.88s
2026-02-02 01:03:20.149595 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.72s
2026-02-02 01:03:20.149598 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s
2026-02-02 01:03:20.149602 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.54s
2026-02-02 01:03:20.149606 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s
2026-02-02 01:03:20.149610 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s
2026-02-02 01:03:20.149613 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s
2026-02-02 01:03:20.149617 | orchestrator |
2026-02-02 01:03:20.149621 | orchestrator |
2026-02-02 01:03:20.149625 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 01:03:20.149629 | orchestrator |
2026-02-02 01:03:20.149633 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 01:03:20.149636 | orchestrator | Monday 02 February 2026 01:00:58 +0000 (0:00:00.287) 0:00:00.287 *******
2026-02-02 01:03:20.149640 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:03:20.149644 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:03:20.149648 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:03:20.149652 | orchestrator |
2026-02-02 01:03:20.149656 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 01:03:20.149660 | orchestrator | Monday 02 February 2026 01:00:58 +0000 (0:00:00.298) 0:00:00.586 *******
2026-02-02 01:03:20.149663 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-02-02 01:03:20.149667 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-02-02 01:03:20.149672 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-02-02 01:03:20.149678 | orchestrator |
2026-02-02 01:03:20.149684 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-02-02 01:03:20.149690 | orchestrator |
2026-02-02 01:03:20.149696 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-02 01:03:20.149702 | orchestrator | Monday 02 February 2026 01:00:58 +0000 (0:00:00.458) 0:00:01.044 *******
2026-02-02 01:03:20.149709 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:03:20.149716 |
orchestrator |
2026-02-02 01:03:20.149723 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-02-02 01:03:20.149729 | orchestrator | Monday 02 February 2026 01:00:59 +0000 (0:00:00.572) 0:00:01.617 *******
2026-02-02 01:03:20.149740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.149759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.149831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.149843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.149851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.149857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.149872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.149885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.149891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.149897 | orchestrator |
2026-02-02 01:03:20.149903 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-02-02 01:03:20.149909 | orchestrator | Monday 02 February 2026 01:01:01 +0000 (0:00:00.165) 0:00:03.506 *******
2026-02-02 01:03:20.149915 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:03:20.149922 | orchestrator |
2026-02-02 01:03:20.149928 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-02-02 01:03:20.149935 | orchestrator | Monday 02 February 2026 01:01:01 +0000 (0:00:00.165) 0:00:03.672 *******
2026-02-02 01:03:20.149941 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:03:20.149948 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:03:20.149954 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:03:20.149960 | orchestrator |
2026-02-02 01:03:20.149967 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-02-02 01:03:20.149971 | orchestrator | Monday 02 February 2026 01:01:01 +0000 (0:00:00.474) 0:00:04.146 *******
2026-02-02 01:03:20.149975 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-02 01:03:20.149979 | orchestrator |
2026-02-02 01:03:20.149983 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-02 01:03:20.149987 | orchestrator | Monday 02 February 2026 01:01:02 +0000 (0:00:00.928) 0:00:05.074 *******
2026-02-02 01:03:20.149991 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:03:20.149995 | orchestrator |
2026-02-02 01:03:20.149999 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-02-02 01:03:20.150003 | orchestrator | Monday 02 February 2026 01:01:03 +0000 (0:00:00.595) 0:00:05.670 *******
2026-02-02 01:03:20.150007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.150096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.150105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.150110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.150116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.150125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.150129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.150140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.150145 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.150149 | orchestrator |
2026-02-02 01:03:20.150154 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-02-02 01:03:20.150158 | orchestrator | Monday 02 February 2026 01:01:06 +0000 (0:00:03.504) 0:00:09.175 *******
2026-02-02 01:03:20.150163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.150171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.150176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.150180 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:03:20.150192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.150198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.150202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.150207 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:03:20.150211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.150219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.150224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.150231 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:03:20.150235 | orchestrator |
2026-02-02 01:03:20.150242 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-02-02 01:03:20.150247 | orchestrator | Monday 02 February 2026 01:01:07 +0000 (0:00:00.601) 0:00:09.777 *******
2026-02-02 01:03:20.150251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.150256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.150266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.150273 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:03:20.150280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.150293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.150300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.150307 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:03:20.150314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.150326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.150333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.150340 |
orchestrator | skipping: [testbed-node-2] 2026-02-02 01:03:20.150347 | orchestrator | 2026-02-02 01:03:20.150354 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-02 01:03:20.150359 | orchestrator | Monday 02 February 2026 01:01:08 +0000 (0:00:00.879) 0:00:10.657 ******* 2026-02-02 01:03:20.150371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-02 01:03:20.150377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-02 01:03:20.150385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-02 01:03:20.150391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-02 01:03:20.150396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-02 01:03:20.150403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-02 01:03:20.150410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-02 01:03:20.150415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-02 01:03:20.150426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-02 01:03:20.150431 | orchestrator | 2026-02-02 01:03:20.150435 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-02 01:03:20.150440 | orchestrator | Monday 02 February 2026 01:01:12 +0000 (0:00:03.769) 0:00:14.427 ******* 2026-02-02 01:03:20.150444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-02 01:03:20.150449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 01:03:20.150458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-02 01:03:20.150463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 01:03:20.150470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-02 01:03:20.150474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 01:03:20.150478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-02 01:03:20.150487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-02 01:03:20.150492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-02 01:03:20.150499 | orchestrator | 2026-02-02 01:03:20.150503 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-02 01:03:20.150507 | orchestrator | Monday 02 February 2026 01:01:18 +0000 (0:00:05.847) 0:00:20.274 ******* 2026-02-02 01:03:20.150511 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:03:20.150515 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:03:20.150519 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:03:20.150523 | orchestrator | 2026-02-02 01:03:20.150527 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-02 01:03:20.150531 | orchestrator | Monday 02 February 2026 01:01:19 +0000 (0:00:01.628) 0:00:21.903 ******* 2026-02-02 01:03:20.150534 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:03:20.150538 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:03:20.150542 | orchestrator | 
skipping: [testbed-node-2] 2026-02-02 01:03:20.150546 | orchestrator | 2026-02-02 01:03:20.150550 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-02 01:03:20.150554 | orchestrator | Monday 02 February 2026 01:01:20 +0000 (0:00:00.586) 0:00:22.489 ******* 2026-02-02 01:03:20.150557 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:03:20.150561 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:03:20.150565 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:03:20.150569 | orchestrator | 2026-02-02 01:03:20.150573 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-02 01:03:20.150576 | orchestrator | Monday 02 February 2026 01:01:20 +0000 (0:00:00.345) 0:00:22.835 ******* 2026-02-02 01:03:20.150580 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:03:20.150584 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:03:20.150588 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:03:20.150592 | orchestrator | 2026-02-02 01:03:20.150595 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-02 01:03:20.150599 | orchestrator | Monday 02 February 2026 01:01:21 +0000 (0:00:00.537) 0:00:23.372 ******* 2026-02-02 01:03:20.150603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-02 01:03:20.150608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 01:03:20.150617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 01:03:20.150625 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:03:20.150629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-02 01:03:20.150634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 01:03:20.150638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 01:03:20.150642 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:03:20.150646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-02 01:03:20.150658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8023'], 'timeout': '30'}}})  2026-02-02 01:03:20.150662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 01:03:20.150666 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:03:20.150670 | orchestrator | 2026-02-02 01:03:20.150674 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-02 01:03:20.150678 | orchestrator | Monday 02 February 2026 01:01:21 +0000 (0:00:00.588) 0:00:23.961 ******* 2026-02-02 01:03:20.150682 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:03:20.150685 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:03:20.150689 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:03:20.150693 | orchestrator | 2026-02-02 01:03:20.150697 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-02 01:03:20.150701 | orchestrator | Monday 02 February 2026 01:01:22 +0000 (0:00:00.331) 0:00:24.292 ******* 2026-02-02 01:03:20.150704 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-02 01:03:20.150709 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-02 01:03:20.150713 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-02 01:03:20.150716 | orchestrator | 
2026-02-02 01:03:20.150720 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-02 01:03:20.150724 | orchestrator | Monday 02 February 2026 01:01:23 +0000 (0:00:01.725) 0:00:26.018 ******* 2026-02-02 01:03:20.150728 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 01:03:20.150732 | orchestrator | 2026-02-02 01:03:20.150736 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-02 01:03:20.150739 | orchestrator | Monday 02 February 2026 01:01:24 +0000 (0:00:01.095) 0:00:27.113 ******* 2026-02-02 01:03:20.150743 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:03:20.150747 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:03:20.150751 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:03:20.150755 | orchestrator | 2026-02-02 01:03:20.150759 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-02 01:03:20.150762 | orchestrator | Monday 02 February 2026 01:01:26 +0000 (0:00:01.080) 0:00:28.193 ******* 2026-02-02 01:03:20.150766 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 01:03:20.150770 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-02 01:03:20.150774 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-02 01:03:20.150778 | orchestrator | 2026-02-02 01:03:20.150781 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-02 01:03:20.150785 | orchestrator | Monday 02 February 2026 01:01:27 +0000 (0:00:01.301) 0:00:29.495 ******* 2026-02-02 01:03:20.150789 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:03:20.150822 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:03:20.150827 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:03:20.150835 | orchestrator | 2026-02-02 01:03:20.150839 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 
2026-02-02 01:03:20.150843 | orchestrator | Monday 02 February 2026 01:01:27 +0000 (0:00:00.358) 0:00:29.854 *******
2026-02-02 01:03:20.150847 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-02 01:03:20.150851 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-02 01:03:20.150855 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-02 01:03:20.150858 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-02 01:03:20.150862 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-02 01:03:20.150866 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-02 01:03:20.150870 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-02 01:03:20.150874 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-02 01:03:20.150878 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-02 01:03:20.150881 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-02 01:03:20.150885 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-02 01:03:20.150889 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-02 01:03:20.150895 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-02 01:03:20.150903 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-02 01:03:20.150907 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-02 01:03:20.150911 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-02 01:03:20.150915 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-02 01:03:20.150919 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-02 01:03:20.150923 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-02 01:03:20.150926 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-02 01:03:20.150930 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-02 01:03:20.150934 | orchestrator |
2026-02-02 01:03:20.150938 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-02-02 01:03:20.150942 | orchestrator | Monday 02 February 2026 01:01:37 +0000 (0:00:09.549) 0:00:39.403 *******
2026-02-02 01:03:20.150946 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-02 01:03:20.150949 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-02 01:03:20.150953 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-02 01:03:20.150957 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-02 01:03:20.150961 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-02 01:03:20.150965 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-02 01:03:20.150969 | orchestrator |
2026-02-02 01:03:20.150972 | orchestrator | TASK [service-check-containers : keystone | Check containers] ******************
2026-02-02 01:03:20.150979 | orchestrator | Monday 02 February 2026 01:01:40 +0000 (0:00:02.904) 0:00:42.308 *******
2026-02-02 01:03:20.150983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.150988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.152146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.152230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.152239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.152252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.152257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.152262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.152276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.152281 | orchestrator |
2026-02-02 01:03:20.152286 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] ***
2026-02-02 01:03:20.152291 | orchestrator | Monday 02 February 2026 01:01:42 +0000 (0:00:02.447) 0:00:44.755 *******
2026-02-02 01:03:20.152296 | orchestrator | changed: [testbed-node-0] => {
2026-02-02 01:03:20.152302 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 01:03:20.152309 | orchestrator | }
2026-02-02 01:03:20.152315 | orchestrator | changed: [testbed-node-1] => {
2026-02-02 01:03:20.152320 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 01:03:20.152326 | orchestrator | }
2026-02-02 01:03:20.152332 | orchestrator | changed: [testbed-node-2] => {
2026-02-02 01:03:20.152338 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 01:03:20.152345 | orchestrator | }
2026-02-02 01:03:20.152350 | orchestrator |
2026-02-02 01:03:20.152356 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-02 01:03:20.152369 | orchestrator | Monday 02 February 2026 01:01:42 +0000 (0:00:00.355) 0:00:45.110 *******
2026-02-02 01:03:20.152374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.152379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.152383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.152387 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:03:20.152399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.152403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.152414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.152418 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:03:20.152422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-02-02 01:03:20.152426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 01:03:20.152430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 01:03:20.152434 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:03:20.152438 | orchestrator |
2026-02-02 01:03:20.152442 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-02 01:03:20.152446 | orchestrator | Monday 02 February 2026 01:01:43 +0000 (0:00:00.347) 0:00:46.090 *******
2026-02-02 01:03:20.152450 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:03:20.152456 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:03:20.152462 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:03:20.152466 | orchestrator |
2026-02-02 01:03:20.152469 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-02-02 01:03:20.152473 | orchestrator | Monday 02 February 2026 01:01:44 +0000 (0:00:00.347) 0:00:46.438 *******
2026-02-02 01:03:20.152477 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:03:20.152485 | orchestrator |
2026-02-02 01:03:20.152489 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-02-02 01:03:20.152492 | orchestrator | Monday 02 February 2026 01:01:46 +0000 (0:00:02.287) 0:00:48.726 *******
2026-02-02 01:03:20.152496 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:03:20.152500 | orchestrator |
2026-02-02 01:03:20.152504 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-02-02 01:03:20.152507 | orchestrator | Monday 02 February 2026 01:01:48 +0000 (0:00:02.167) 0:00:50.893 *******
2026-02-02 01:03:20.152511 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:03:20.152516 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:03:20.152519 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:03:20.152523 | orchestrator |
2026-02-02 01:03:20.152527 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-02-02 01:03:20.152531 | orchestrator | Monday 02 February 2026 01:01:49 +0000 (0:00:00.981) 0:00:51.875 *******
2026-02-02 01:03:20.152534 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:03:20.152538 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:03:20.152542 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:03:20.152546 | orchestrator |
2026-02-02 01:03:20.152549 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-02-02 01:03:20.152553 | orchestrator | Monday 02 February 2026 01:01:50 +0000 (0:00:00.319) 0:00:52.195 *******
2026-02-02 01:03:20.152557 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:03:20.152561 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:03:20.152565 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:03:20.152569 | orchestrator |
2026-02-02 01:03:20.152572 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-02-02 01:03:20.152576 | orchestrator | Monday 02 February 2026 01:01:50 +0000 (0:00:00.640) 0:00:52.835 *******
2026-02-02 01:03:20.152580 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:03:20.152584 | orchestrator |
2026-02-02 01:03:20.152587 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-02-02 01:03:20.152591 | orchestrator | Monday 02 February 2026 01:02:05 +0000 (0:00:15.148) 0:01:07.983 *******
2026-02-02 01:03:20.152595 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:03:20.152599 | orchestrator |
2026-02-02 01:03:20.152603 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-02 01:03:20.152606 | orchestrator | Monday 02 February 2026 01:02:17 +0000 (0:00:11.502) 0:01:19.486 *******
2026-02-02 01:03:20.152610 | orchestrator |
2026-02-02 01:03:20.152614 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-02 01:03:20.152618 | orchestrator | Monday 02 February 2026 01:02:17 +0000 (0:00:00.066) 0:01:19.552 *******
2026-02-02 01:03:20.152621 | orchestrator |
2026-02-02 01:03:20.152625 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-02 01:03:20.152629 | orchestrator | Monday 02 February 2026 01:02:17 +0000 (0:00:00.090) 0:01:19.643 *******
2026-02-02 01:03:20.152633 | orchestrator |
2026-02-02 01:03:20.152637 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-02-02 01:03:20.152640 | orchestrator | Monday 02 February 2026 01:02:17 +0000 (0:00:00.070) 0:01:19.713 *******
2026-02-02 01:03:20.152644 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:03:20.152648 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:03:20.152652 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:03:20.152655 | orchestrator |
2026-02-02 01:03:20.152659 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-02-02 01:03:20.152663 | orchestrator | Monday 02 February 2026 01:02:26 +0000 (0:00:09.007) 0:01:28.721 *******
2026-02-02 01:03:20.152667 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:03:20.152670 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:03:20.152674 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:03:20.152678 | orchestrator |
2026-02-02 01:03:20.152682 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-02-02 01:03:20.152691 | orchestrator | Monday 02 February 2026 01:02:36 +0000 (0:00:10.045) 0:01:38.592 *******
2026-02-02 01:03:20.152694 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:03:20.152698 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:03:20.152702 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:03:20.152706 | orchestrator |
2026-02-02 01:03:20.152709 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-02 01:03:20.152713 | orchestrator | Monday 02 February 2026 01:02:46 +0000 (0:00:10.045) 0:01:48.638 *******
2026-02-02 01:03:20.152717 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:03:20.152721 | orchestrator |
2026-02-02 01:03:20.152725 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-02-02 01:03:20.152729 | orchestrator | Monday 02 February 2026 01:02:47 +0000 (0:00:00.606) 0:01:49.244 *******
2026-02-02 01:03:20.152732 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:03:20.152736 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:03:20.152740 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:03:20.152744 | orchestrator |
2026-02-02 01:03:20.152747 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-02-02 01:03:20.152751 | orchestrator | Monday 02 February 2026 01:02:48 +0000 (0:00:01.316) 0:01:50.561 *******
2026-02-02 01:03:20.152755 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:03:20.152759 | orchestrator |
2026-02-02 01:03:20.152763 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-02-02 01:03:20.152766 | orchestrator | Monday 02 February 2026 01:02:50 +0000 (0:00:01.769) 0:01:52.330 *******
2026-02-02 01:03:20.152770 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-02-02 01:03:20.152774 | orchestrator |
2026-02-02 01:03:20.152778 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting services] *************
2026-02-02 01:03:20.152782 | orchestrator | Monday 02 February 2026 01:03:02 +0000 (0:00:12.607) 0:02:04.938 *******
2026-02-02 01:03:20.152790 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-02-02 01:03:20.152811 | orchestrator |
2026-02-02 01:03:20.152818 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting endpoints] ************
2026-02-02 01:03:20.152822 | orchestrator | Monday 02 February 2026 01:03:07 +0000 (0:00:04.319) 0:02:09.258 ******* 2026-02-02 01:03:20.152826 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-02-02 01:03:20.152830 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-02-02 01:03:20.152834 | orchestrator | 2026-02-02 01:03:20.152838 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-02-02 01:03:20.152842 | orchestrator | Monday 02 February 2026 01:03:14 +0000 (0:00:07.113) 0:02:16.371 ******* 2026-02-02 01:03:20.152845 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:03:20.152849 | orchestrator | 2026-02-02 01:03:20.152853 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-02-02 01:03:20.152857 | orchestrator | Monday 02 February 2026 01:03:14 +0000 (0:00:00.136) 0:02:16.507 ******* 2026-02-02 01:03:20.152860 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:03:20.152865 | orchestrator | 2026-02-02 01:03:20.152869 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-02-02 01:03:20.152874 | orchestrator | Monday 02 February 2026 01:03:14 +0000 (0:00:00.116) 0:02:16.624 ******* 2026-02-02 01:03:20.152878 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:03:20.152883 | orchestrator | 2026-02-02 01:03:20.152887 | orchestrator | TASK [service-ks-register : keystone | Granting/revoking user roles] *********** 2026-02-02 01:03:20.152892 | orchestrator | Monday 02 February 2026 01:03:14 +0000 (0:00:00.142) 0:02:16.767 ******* 2026-02-02 01:03:20.152896 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:03:20.152900 | orchestrator | 2026-02-02 01:03:20.152905 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-02-02 
01:03:20.152909 | orchestrator | Monday 02 February 2026 01:03:14 +0000 (0:00:00.346) 0:02:17.113 ******* 2026-02-02 01:03:20.152918 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:03:20.152923 | orchestrator | 2026-02-02 01:03:20.152927 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-02 01:03:20.152932 | orchestrator | Monday 02 February 2026 01:03:18 +0000 (0:00:03.556) 0:02:20.670 ******* 2026-02-02 01:03:20.152937 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:03:20.152941 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:03:20.152945 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:03:20.152950 | orchestrator | 2026-02-02 01:03:20.152954 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:03:20.152959 | orchestrator | testbed-node-0 : ok=34  changed=20  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-02-02 01:03:20.152965 | orchestrator | testbed-node-1 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-02 01:03:20.152969 | orchestrator | testbed-node-2 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-02 01:03:20.152974 | orchestrator | 2026-02-02 01:03:20.152978 | orchestrator | 2026-02-02 01:03:20.152983 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:03:20.152987 | orchestrator | Monday 02 February 2026 01:03:18 +0000 (0:00:00.456) 0:02:21.127 ******* 2026-02-02 01:03:20.152992 | orchestrator | =============================================================================== 2026-02-02 01:03:20.152996 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.15s 2026-02-02 01:03:20.153000 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.61s 2026-02-02 01:03:20.153004 | orchestrator | keystone 
: Running Keystone fernet bootstrap container ----------------- 11.50s 2026-02-02 01:03:20.153009 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.05s 2026-02-02 01:03:20.153013 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.87s 2026-02-02 01:03:20.153018 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.55s 2026-02-02 01:03:20.153022 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 9.01s 2026-02-02 01:03:20.153026 | orchestrator | service-ks-register : keystone | Creating/deleting endpoints ------------ 7.11s 2026-02-02 01:03:20.153031 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.85s 2026-02-02 01:03:20.153035 | orchestrator | service-ks-register : keystone | Creating/deleting services ------------- 4.32s 2026-02-02 01:03:20.153040 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.77s 2026-02-02 01:03:20.153044 | orchestrator | keystone : Creating default user role ----------------------------------- 3.56s 2026-02-02 01:03:20.153049 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.51s 2026-02-02 01:03:20.153053 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.90s 2026-02-02 01:03:20.153057 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.45s 2026-02-02 01:03:20.153061 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.29s 2026-02-02 01:03:20.153066 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.17s 2026-02-02 01:03:20.153070 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.89s 2026-02-02 01:03:20.153075 | orchestrator | keystone : Run key 
distribution ----------------------------------------- 1.77s 2026-02-02 01:03:20.153081 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.73s 2026-02-02 01:03:20.153089 | orchestrator | 2026-02-02 01:03:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:03:23.201480 | orchestrator | 2026-02-02 01:03:23 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:03:23.201617 | orchestrator | 2026-02-02 01:03:23 | INFO  | Task c58d2939-61d4-4afa-868f-8055659cb503 is in state STARTED 2026-02-02 01:03:23.202857 | orchestrator | 2026-02-02 01:03:23 | INFO  | Task a25c89ac-fae9-4f80-bb3d-730848ddce69 is in state STARTED 2026-02-02 01:03:23.203631 | orchestrator | 2026-02-02 01:03:23 | INFO  | Task 96f8da3e-3c72-46d3-80ab-f219ac64d3d1 is in state STARTED 2026-02-02 01:03:23.204627 | orchestrator | 2026-02-02 01:03:23 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:03:23.204703 | orchestrator | 2026-02-02 01:03:23 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:03:26.245036 | orchestrator | 2026-02-02 01:03:26 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:03:26.245142 | orchestrator | 2026-02-02 01:03:26 | INFO  | Task c58d2939-61d4-4afa-868f-8055659cb503 is in state STARTED 2026-02-02 01:03:26.245168 | orchestrator | 2026-02-02 01:03:26 | INFO  | Task a25c89ac-fae9-4f80-bb3d-730848ddce69 is in state STARTED 2026-02-02 01:03:26.245560 | orchestrator | 2026-02-02 01:03:26 | INFO  | Task 96f8da3e-3c72-46d3-80ab-f219ac64d3d1 is in state STARTED 2026-02-02 01:03:26.246318 | orchestrator | 2026-02-02 01:03:26 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:03:26.246362 | orchestrator | 2026-02-02 01:03:26 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:03:29.285324 | orchestrator | 2026-02-02 01:03:29 | INFO  | Task 
96f8da3e-3c72-46d3-80ab-f219ac64d3d1 is in state STARTED 2026-02-02 01:04:36.396877 | orchestrator | 2026-02-02 01:04:36 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:04:36.396924 | orchestrator | 2026-02-02 01:04:36 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:04:39.417021 | orchestrator | 2026-02-02 01:04:39 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:04:39.417382 | orchestrator | 2026-02-02 01:04:39 | INFO  | Task c58d2939-61d4-4afa-868f-8055659cb503 is in state STARTED 2026-02-02 01:04:39.418176 | orchestrator | 2026-02-02 01:04:39 | INFO  | Task a25c89ac-fae9-4f80-bb3d-730848ddce69 is in state SUCCESS 2026-02-02 01:04:39.419695 | orchestrator | 2026-02-02 01:04:39 | INFO  | Task 96f8da3e-3c72-46d3-80ab-f219ac64d3d1 is in state STARTED 2026-02-02 01:04:39.422942 | orchestrator | 2026-02-02 01:04:39 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:04:39.423000 | orchestrator | 2026-02-02 01:04:39 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:04:42.450662 | orchestrator | 2026-02-02 01:04:42 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:04:42.450803 | orchestrator | 2026-02-02 01:04:42 | INFO  | Task c58d2939-61d4-4afa-868f-8055659cb503 is in state STARTED 2026-02-02 01:04:42.451273 | orchestrator | 2026-02-02 01:04:42 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED 2026-02-02 01:04:42.451743 | orchestrator | 2026-02-02 01:04:42 | INFO  | Task 96f8da3e-3c72-46d3-80ab-f219ac64d3d1 is in state STARTED 2026-02-02 01:04:42.452567 | orchestrator | 2026-02-02 01:04:42 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:04:42.452629 | orchestrator | 2026-02-02 01:04:42 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:04:45.476360 | orchestrator | 2026-02-02 01:04:45 | INFO  | Task 
96f8da3e-3c72-46d3-80ab-f219ac64d3d1 is in state STARTED 2026-02-02 01:04:51.543432 | orchestrator | 2026-02-02 01:04:51 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:04:51.543463 | orchestrator | 2026-02-02 01:04:51 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:04:54.577553 | orchestrator | 2026-02-02 01:04:54 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:04:54.578009 | orchestrator | 2026-02-02 01:04:54 | INFO  | Task c58d2939-61d4-4afa-868f-8055659cb503 is in state STARTED 2026-02-02 01:04:54.580236 | orchestrator | 2026-02-02 01:04:54 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED 2026-02-02 01:04:54.580354 | orchestrator | 2026-02-02 01:04:54 | INFO  | Task 96f8da3e-3c72-46d3-80ab-f219ac64d3d1 is in state STARTED 2026-02-02 01:04:54.580382 | orchestrator | 2026-02-02 01:04:54 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:04:54.580401 | orchestrator | 2026-02-02 01:04:54 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:04:57.617888 | orchestrator | 2026-02-02 01:04:57 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:04:57.617963 | orchestrator | 2026-02-02 01:04:57 | INFO  | Task c58d2939-61d4-4afa-868f-8055659cb503 is in state SUCCESS 2026-02-02 01:04:57.618862 | orchestrator | 2026-02-02 01:04:57 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED 2026-02-02 01:04:57.619967 | orchestrator | 2026-02-02 01:04:57 | INFO  | Task 96f8da3e-3c72-46d3-80ab-f219ac64d3d1 is in state STARTED 2026-02-02 01:04:57.620929 | orchestrator | 2026-02-02 01:04:57 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:04:57.620954 | orchestrator | 2026-02-02 01:04:57 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:05:00.652977 | orchestrator | 2026-02-02 01:05:00 | INFO  | Task 
c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:05:27.948479 | orchestrator | 2026-02-02 01:05:27 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED 2026-02-02 01:05:27.949192 | orchestrator | 2026-02-02 01:05:27 | INFO  | Task 96f8da3e-3c72-46d3-80ab-f219ac64d3d1 is in state STARTED 2026-02-02 01:05:27.949731 | orchestrator | 2026-02-02 01:05:27 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:05:27.949756 | orchestrator | 2026-02-02 01:05:27 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:05:30.981063 | orchestrator | 2026-02-02 01:05:30 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:05:30.981402 | orchestrator | 2026-02-02 01:05:30 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED 2026-02-02 01:05:30.984485 | orchestrator | 2026-02-02 01:05:30 | INFO  | Task 96f8da3e-3c72-46d3-80ab-f219ac64d3d1 is in state SUCCESS 2026-02-02 01:05:30.985800 | orchestrator | 2026-02-02 01:05:30.985835 | orchestrator | 2026-02-02 01:05:30.985848 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-02-02 01:05:30.985860 | orchestrator | 2026-02-02 01:05:30.985885 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-02-02 01:05:30.985897 | orchestrator | Monday 02 February 2026 01:03:25 +0000 (0:00:00.118) 0:00:00.118 ******* 2026-02-02 01:05:30.985909 | orchestrator | changed: [localhost] 2026-02-02 01:05:30.985920 | orchestrator | 2026-02-02 01:05:30.985932 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-02-02 01:05:30.985943 | orchestrator | Monday 02 February 2026 01:03:26 +0000 (0:00:01.261) 0:00:01.379 ******* 2026-02-02 01:05:30.985953 | orchestrator | changed: [localhost] 2026-02-02 01:05:30.985965 | orchestrator | 2026-02-02 01:05:30.985976 | orchestrator | TASK 
[Download ironic-agent kernel] ******************************************** 2026-02-02 01:05:30.985987 | orchestrator | Monday 02 February 2026 01:04:06 +0000 (0:00:39.484) 0:00:40.864 ******* 2026-02-02 01:05:30.985998 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2026-02-02 01:05:30.986009 | orchestrator | changed: [localhost] 2026-02-02 01:05:30.986069 | orchestrator | 2026-02-02 01:05:30.986094 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 01:05:30.986176 | orchestrator | 2026-02-02 01:05:30.986197 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 01:05:30.986406 | orchestrator | Monday 02 February 2026 01:04:35 +0000 (0:00:29.225) 0:01:10.090 ******* 2026-02-02 01:05:30.986435 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:05:30.986454 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:05:30.986467 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:05:30.986480 | orchestrator | 2026-02-02 01:05:30.986493 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 01:05:30.986505 | orchestrator | Monday 02 February 2026 01:04:36 +0000 (0:00:00.487) 0:01:10.577 ******* 2026-02-02 01:05:30.986518 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-02-02 01:05:30.986531 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-02-02 01:05:30.986543 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-02-02 01:05:30.986556 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-02-02 01:05:30.986568 | orchestrator | 2026-02-02 01:05:30.986581 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-02-02 01:05:30.986593 | orchestrator | skipping: no hosts matched 2026-02-02 01:05:30.986607 | 
orchestrator | 2026-02-02 01:05:30.986619 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:05:30.986632 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:05:30.986646 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:05:30.986660 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:05:30.986673 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:05:30.986709 | orchestrator | 2026-02-02 01:05:30.986723 | orchestrator | 2026-02-02 01:05:30.986737 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:05:30.986750 | orchestrator | Monday 02 February 2026 01:04:37 +0000 (0:00:01.653) 0:01:12.231 ******* 2026-02-02 01:05:30.986760 | orchestrator | =============================================================================== 2026-02-02 01:05:30.986771 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 39.48s 2026-02-02 01:05:30.986782 | orchestrator | Download ironic-agent kernel ------------------------------------------- 29.23s 2026-02-02 01:05:30.986793 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.66s 2026-02-02 01:05:30.986804 | orchestrator | Ensure the destination directory exists --------------------------------- 1.26s 2026-02-02 01:05:30.986815 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s 2026-02-02 01:05:30.986826 | orchestrator | 2026-02-02 01:05:30.986837 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-02 01:05:30.986849 | orchestrator | 2.16.14 2026-02-02 01:05:30.986860 | orchestrator | 2026-02-02 
01:05:30.986872 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-02-02 01:05:30.986883 | orchestrator | 2026-02-02 01:05:30.986894 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-02 01:05:30.986905 | orchestrator | Monday 02 February 2026 01:03:24 +0000 (0:00:00.242) 0:00:00.242 ******* 2026-02-02 01:05:30.986916 | orchestrator | changed: [testbed-manager] 2026-02-02 01:05:30.986927 | orchestrator | 2026-02-02 01:05:30.986938 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-02 01:05:30.986949 | orchestrator | Monday 02 February 2026 01:03:26 +0000 (0:00:01.555) 0:00:01.797 ******* 2026-02-02 01:05:30.986960 | orchestrator | changed: [testbed-manager] 2026-02-02 01:05:30.986970 | orchestrator | 2026-02-02 01:05:30.986981 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-02 01:05:30.986993 | orchestrator | Monday 02 February 2026 01:03:26 +0000 (0:00:00.895) 0:00:02.693 ******* 2026-02-02 01:05:30.987013 | orchestrator | changed: [testbed-manager] 2026-02-02 01:05:30.987024 | orchestrator | 2026-02-02 01:05:30.987035 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-02 01:05:30.987045 | orchestrator | Monday 02 February 2026 01:03:27 +0000 (0:00:00.901) 0:00:03.594 ******* 2026-02-02 01:05:30.987056 | orchestrator | changed: [testbed-manager] 2026-02-02 01:05:30.987067 | orchestrator | 2026-02-02 01:05:30.987078 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-02 01:05:30.987136 | orchestrator | Monday 02 February 2026 01:03:28 +0000 (0:00:01.109) 0:00:04.703 ******* 2026-02-02 01:05:30.987157 | orchestrator | changed: [testbed-manager] 2026-02-02 01:05:30.987191 | orchestrator | 2026-02-02 01:05:30.987210 | orchestrator | TASK [Set 
mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-02 01:05:30.987232 | orchestrator | Monday 02 February 2026 01:03:30 +0000 (0:00:01.051) 0:00:05.755 ******* 2026-02-02 01:05:30.987244 | orchestrator | changed: [testbed-manager] 2026-02-02 01:05:30.987254 | orchestrator | 2026-02-02 01:05:30.987266 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-02 01:05:30.987276 | orchestrator | Monday 02 February 2026 01:03:31 +0000 (0:00:01.344) 0:00:07.100 ******* 2026-02-02 01:05:30.987287 | orchestrator | changed: [testbed-manager] 2026-02-02 01:05:30.987298 | orchestrator | 2026-02-02 01:05:30.987309 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-02-02 01:05:30.987319 | orchestrator | Monday 02 February 2026 01:03:33 +0000 (0:00:02.042) 0:00:09.143 ******* 2026-02-02 01:05:30.987330 | orchestrator | changed: [testbed-manager] 2026-02-02 01:05:30.987341 | orchestrator | 2026-02-02 01:05:30.987352 | orchestrator | TASK [Create admin user] ******************************************************* 2026-02-02 01:05:30.987362 | orchestrator | Monday 02 February 2026 01:03:34 +0000 (0:00:01.290) 0:00:10.433 ******* 2026-02-02 01:05:30.987373 | orchestrator | changed: [testbed-manager] 2026-02-02 01:05:30.987384 | orchestrator | 2026-02-02 01:05:30.987394 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-02 01:05:30.987405 | orchestrator | Monday 02 February 2026 01:04:31 +0000 (0:00:57.120) 0:01:07.554 ******* 2026-02-02 01:05:30.987416 | orchestrator | skipping: [testbed-manager] 2026-02-02 01:05:30.987427 | orchestrator | 2026-02-02 01:05:30.987438 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-02 01:05:30.987448 | orchestrator | 2026-02-02 01:05:30.987459 | orchestrator | TASK [Restart ceph manager service] 
******************************************** 2026-02-02 01:05:30.987470 | orchestrator | Monday 02 February 2026 01:04:32 +0000 (0:00:00.196) 0:01:07.750 ******* 2026-02-02 01:05:30.987480 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:05:30.987491 | orchestrator | 2026-02-02 01:05:30.987502 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-02 01:05:30.987513 | orchestrator | 2026-02-02 01:05:30.987523 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-02 01:05:30.987534 | orchestrator | Monday 02 February 2026 01:04:43 +0000 (0:00:11.392) 0:01:19.143 ******* 2026-02-02 01:05:30.987545 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:05:30.987556 | orchestrator | 2026-02-02 01:05:30.987567 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-02 01:05:30.987577 | orchestrator | 2026-02-02 01:05:30.987588 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-02 01:05:30.987599 | orchestrator | Monday 02 February 2026 01:04:54 +0000 (0:00:11.249) 0:01:30.393 ******* 2026-02-02 01:05:30.987610 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:05:30.987620 | orchestrator | 2026-02-02 01:05:30.987632 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:05:30.987642 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 01:05:30.987654 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:05:30.987672 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:05:30.987754 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 
01:05:30.987766 | orchestrator | 2026-02-02 01:05:30.987776 | orchestrator | 2026-02-02 01:05:30.987787 | orchestrator | 2026-02-02 01:05:30.987798 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:05:30.987809 | orchestrator | Monday 02 February 2026 01:04:55 +0000 (0:00:01.224) 0:01:31.617 ******* 2026-02-02 01:05:30.987820 | orchestrator | =============================================================================== 2026-02-02 01:05:30.987831 | orchestrator | Create admin user ------------------------------------------------------ 57.12s 2026-02-02 01:05:30.987841 | orchestrator | Restart ceph manager service ------------------------------------------- 23.87s 2026-02-02 01:05:30.987852 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.04s 2026-02-02 01:05:30.987863 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.56s 2026-02-02 01:05:30.987874 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.34s 2026-02-02 01:05:30.987884 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.29s 2026-02-02 01:05:30.987895 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.11s 2026-02-02 01:05:30.987905 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.05s 2026-02-02 01:05:30.987916 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.90s 2026-02-02 01:05:30.987927 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.90s 2026-02-02 01:05:30.987937 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.20s 2026-02-02 01:05:30.987948 | orchestrator | 2026-02-02 01:05:30.987959 | orchestrator | 2026-02-02 01:05:30.987970 | orchestrator | PLAY [Group 
hosts based on configuration] ************************************** 2026-02-02 01:05:30.987980 | orchestrator | 2026-02-02 01:05:30.987991 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 01:05:30.988002 | orchestrator | Monday 02 February 2026 01:03:25 +0000 (0:00:00.250) 0:00:00.250 ******* 2026-02-02 01:05:30.988013 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:05:30.988024 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:05:30.988034 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:05:30.988045 | orchestrator | 2026-02-02 01:05:30.988065 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 01:05:30.988076 | orchestrator | Monday 02 February 2026 01:03:25 +0000 (0:00:00.406) 0:00:00.656 ******* 2026-02-02 01:05:30.988093 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-02 01:05:30.988104 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-02 01:05:30.988115 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-02 01:05:30.988126 | orchestrator | 2026-02-02 01:05:30.988137 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-02-02 01:05:30.988148 | orchestrator | 2026-02-02 01:05:30.988158 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-02 01:05:30.988169 | orchestrator | Monday 02 February 2026 01:03:26 +0000 (0:00:00.508) 0:00:01.164 ******* 2026-02-02 01:05:30.988179 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:05:30.988189 | orchestrator | 2026-02-02 01:05:30.988198 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] ************* 2026-02-02 01:05:30.988208 | orchestrator | Monday 02 February 2026 01:03:26 +0000 (0:00:00.621) 
0:00:01.786 ******* 2026-02-02 01:05:30.988217 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-02 01:05:30.988233 | orchestrator | 2026-02-02 01:05:30.988243 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting endpoints] ************ 2026-02-02 01:05:30.988252 | orchestrator | Monday 02 February 2026 01:03:30 +0000 (0:00:04.053) 0:00:05.839 ******* 2026-02-02 01:05:30.988262 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-02 01:05:30.988272 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-02 01:05:30.988281 | orchestrator | 2026-02-02 01:05:30.988291 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-02 01:05:30.988301 | orchestrator | Monday 02 February 2026 01:03:37 +0000 (0:00:06.799) 0:00:12.638 ******* 2026-02-02 01:05:30.988310 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-02 01:05:30.988320 | orchestrator | 2026-02-02 01:05:30.988330 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-02 01:05:30.988339 | orchestrator | Monday 02 February 2026 01:03:41 +0000 (0:00:03.466) 0:00:16.104 ******* 2026-02-02 01:05:30.988349 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-02 01:05:30.988358 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 01:05:30.988368 | orchestrator | 2026-02-02 01:05:30.988377 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-02 01:05:30.988387 | orchestrator | Monday 02 February 2026 01:03:45 +0000 (0:00:03.990) 0:00:20.095 ******* 2026-02-02 01:05:30.988397 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 01:05:30.988406 | orchestrator | changed: [testbed-node-0] => 
(item=key-manager:service-admin) 2026-02-02 01:05:30.988416 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-02 01:05:30.988425 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-02 01:05:30.988435 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-02 01:05:30.988445 | orchestrator | 2026-02-02 01:05:30.988454 | orchestrator | TASK [service-ks-register : barbican | Granting/revoking user roles] *********** 2026-02-02 01:05:30.988464 | orchestrator | Monday 02 February 2026 01:04:01 +0000 (0:00:16.031) 0:00:36.126 ******* 2026-02-02 01:05:30.988473 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-02 01:05:30.988483 | orchestrator | 2026-02-02 01:05:30.988492 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-02 01:05:30.988502 | orchestrator | Monday 02 February 2026 01:04:05 +0000 (0:00:04.246) 0:00:40.373 ******* 2026-02-02 01:05:30.988519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:05:30.988546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:05:30.988565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:05:30.988577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.988588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.988599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.988615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.988635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.988646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.988655 | orchestrator | 2026-02-02 01:05:30.988665 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-02 01:05:30.988714 | orchestrator | Monday 02 February 2026 01:04:07 +0000 (0:00:02.380) 0:00:42.754 ******* 2026-02-02 01:05:30.988734 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-02 01:05:30.988751 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-02 01:05:30.988763 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-02 01:05:30.988772 | orchestrator | 2026-02-02 01:05:30.988782 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-02 01:05:30.988792 | orchestrator | Monday 02 February 2026 01:04:09 +0000 (0:00:02.179) 0:00:44.934 ******* 2026-02-02 01:05:30.988801 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:05:30.988811 | orchestrator | 2026-02-02 01:05:30.988820 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-02 01:05:30.988830 | orchestrator | Monday 02 February 2026 01:04:10 +0000 (0:00:00.176) 0:00:45.111 ******* 2026-02-02 01:05:30.988840 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:05:30.988849 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:05:30.988859 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:05:30.988869 | orchestrator | 2026-02-02 01:05:30.988878 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-02 01:05:30.988888 | orchestrator | Monday 02 February 2026 01:04:10 +0000 (0:00:00.413) 0:00:45.525 ******* 2026-02-02 01:05:30.988898 | orchestrator | included: 
/ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:05:30.988908 | orchestrator | 2026-02-02 01:05:30.988917 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-02 01:05:30.988927 | orchestrator | Monday 02 February 2026 01:04:11 +0000 (0:00:01.059) 0:00:46.584 ******* 2026-02-02 01:05:30.988938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:05:30.988968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:05:30.988981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:05:30.988992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.989003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.989014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.989043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.989058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.989068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.989078 | orchestrator |
2026-02-02 01:05:30.989088 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-02-02 01:05:30.989098 | orchestrator | Monday 02 February 2026 01:04:15 +0000 (0:00:03.826) 0:00:50.410 *******
2026-02-02 01:05:30.989109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.989120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.989247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.989259 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:05:30.989284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.989297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.989307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.989317 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:05:30.989328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.989345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.989838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.989869 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:05:30.989880 | orchestrator |
2026-02-02 01:05:30.989890 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-02-02 01:05:30.989900 | orchestrator | Monday 02 February 2026 01:04:16 +0000 (0:00:00.934) 0:00:51.345 *******
2026-02-02 01:05:30.989912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.989923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.989944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.989955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.989978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.989989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.990054 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:05:30.990076 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:05:30.990091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990118 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:05:30.990128 | orchestrator |
2026-02-02 01:05:30.990138 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-02-02 01:05:30.990148 | orchestrator | Monday 02 February 2026 01:04:18 +0000 (0:00:01.991) 0:00:53.337 *******
2026-02-02 01:05:30.990170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.990182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.990194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.990211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990287 | orchestrator |
2026-02-02 01:05:30.990297 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-02-02 01:05:30.990307 | orchestrator | Monday 02 February 2026 01:04:22 +0000 (0:00:04.293) 0:00:57.630 *******
2026-02-02 01:05:30.990317 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:05:30.990327 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:05:30.990340 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:05:30.990357 | orchestrator |
2026-02-02 01:05:30.990367 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-02-02 01:05:30.990377 | orchestrator | Monday 02 February 2026 01:04:26 +0000 (0:00:03.492) 0:01:01.122 *******
2026-02-02 01:05:30.990386 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-02 01:05:30.990396 | orchestrator |
2026-02-02 01:05:30.990406 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-02-02 01:05:30.990416 | orchestrator | Monday 02 February 2026 01:04:27 +0000 (0:00:01.529) 0:01:02.651 *******
2026-02-02 01:05:30.990425 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:05:30.990482 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:05:30.990493 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:05:30.990502 | orchestrator |
2026-02-02 01:05:30.990512 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-02-02 01:05:30.990522 | orchestrator | Monday 02 February 2026 01:04:28 +0000 (0:00:00.641) 0:01:03.293 *******
2026-02-02 01:05:30.990540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.990557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.990574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.990586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990667 | orchestrator |
2026-02-02 01:05:30.990708 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-02-02 01:05:30.990720 | orchestrator | Monday 02 February 2026 01:04:39 +0000 (0:00:11.454) 0:01:14.748 *******
2026-02-02 01:05:30.990731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.990742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990775 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:05:30.990786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.990803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:05:30.990825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990856 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:05:30.990866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:05:30.990882 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:05:30.990891 | orchestrator |
2026-02-02 01:05:30.990901 | orchestrator | TASK [service-check-containers : barbican | Check containers] ******************
2026-02-02 01:05:30.990911 | orchestrator | Monday 02 February 2026 01:04:40 +0000 (0:00:00.582) 0:01:15.330 *******
2026-02-02 01:05:30.990922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:05:30.990933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:05:30.990950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:05:30.990965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.990980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.990991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.991000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.991010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.991021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:05:30.991031 | orchestrator | 2026-02-02 01:05:30.991040 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] *** 2026-02-02 01:05:30.991050 | orchestrator | Monday 02 February 2026 01:04:45 +0000 (0:00:05.130) 0:01:20.461 ******* 2026-02-02 01:05:30.991060 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 01:05:30.991075 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:05:30.991090 | orchestrator | } 2026-02-02 01:05:30.991100 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 01:05:30.991110 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:05:30.991120 | orchestrator | } 2026-02-02 01:05:30.991129 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 01:05:30.991143 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:05:30.991153 | orchestrator | } 2026-02-02 01:05:30.991162 | orchestrator | 2026-02-02 01:05:30.991172 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 01:05:30.991182 | orchestrator | Monday 02 February 2026 01:04:46 +0000 (0:00:00.772) 0:01:21.233 ******* 2026-02-02 01:05:30.991192 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:05:30.991203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 01:05:30.991214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:05:30.991224 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:05:30.991234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:05:30.991260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 01:05:30.991271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:05:30.991281 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:05:30.991292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}})  2026-02-02 01:05:30.991302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 01:05:30.991313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:05:30.991323 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:05:30.991333 | orchestrator | 2026-02-02 01:05:30.991342 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-02 01:05:30.991358 | orchestrator | Monday 02 February 2026 01:04:47 +0000 (0:00:01.205) 0:01:22.439 ******* 2026-02-02 01:05:30.991368 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:05:30.991378 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:05:30.991387 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:05:30.991397 | orchestrator | 2026-02-02 01:05:30.991407 | 
orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-02 01:05:30.991416 | orchestrator | Monday 02 February 2026 01:04:47 +0000 (0:00:00.441) 0:01:22.881 ******* 2026-02-02 01:05:30.991426 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:05:30.991436 | orchestrator | 2026-02-02 01:05:30.991445 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-02 01:05:30.991455 | orchestrator | Monday 02 February 2026 01:04:50 +0000 (0:00:02.113) 0:01:24.994 ******* 2026-02-02 01:05:30.991464 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:05:30.991474 | orchestrator | 2026-02-02 01:05:30.991484 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-02 01:05:30.991498 | orchestrator | Monday 02 February 2026 01:04:52 +0000 (0:00:02.255) 0:01:27.249 ******* 2026-02-02 01:05:30.991508 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:05:30.991518 | orchestrator | 2026-02-02 01:05:30.991532 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-02 01:05:30.991542 | orchestrator | Monday 02 February 2026 01:05:03 +0000 (0:00:11.417) 0:01:38.667 ******* 2026-02-02 01:05:30.991552 | orchestrator | 2026-02-02 01:05:30.991561 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-02 01:05:30.991571 | orchestrator | Monday 02 February 2026 01:05:03 +0000 (0:00:00.061) 0:01:38.728 ******* 2026-02-02 01:05:30.991581 | orchestrator | 2026-02-02 01:05:30.991590 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-02 01:05:30.991600 | orchestrator | Monday 02 February 2026 01:05:03 +0000 (0:00:00.094) 0:01:38.823 ******* 2026-02-02 01:05:30.991609 | orchestrator | 2026-02-02 01:05:30.991619 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api 
container] ******************** 2026-02-02 01:05:30.991628 | orchestrator | Monday 02 February 2026 01:05:03 +0000 (0:00:00.124) 0:01:38.948 ******* 2026-02-02 01:05:30.991638 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:05:30.991648 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:05:30.991657 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:05:30.991667 | orchestrator | 2026-02-02 01:05:30.991697 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-02 01:05:30.991715 | orchestrator | Monday 02 February 2026 01:05:10 +0000 (0:00:06.787) 0:01:45.735 ******* 2026-02-02 01:05:30.991733 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:05:30.991750 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:05:30.991765 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:05:30.991774 | orchestrator | 2026-02-02 01:05:30.991784 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-02 01:05:30.991796 | orchestrator | Monday 02 February 2026 01:05:21 +0000 (0:00:10.296) 0:01:56.032 ******* 2026-02-02 01:05:30.991812 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:05:30.991822 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:05:30.991835 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:05:30.991849 | orchestrator | 2026-02-02 01:05:30.991859 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:05:30.991869 | orchestrator | testbed-node-0 : ok=25  changed=20  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-02 01:05:30.991879 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 01:05:30.991889 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 01:05:30.991899 | orchestrator | 2026-02-02 01:05:30.991916 | 
orchestrator | 2026-02-02 01:05:30.991926 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:05:30.991942 | orchestrator | Monday 02 February 2026 01:05:29 +0000 (0:00:08.249) 0:02:04.282 ******* 2026-02-02 01:05:30.991953 | orchestrator | =============================================================================== 2026-02-02 01:05:30.991962 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.03s 2026-02-02 01:05:30.991972 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.45s 2026-02-02 01:05:30.991981 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.42s 2026-02-02 01:05:30.991991 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.30s 2026-02-02 01:05:30.992001 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.25s 2026-02-02 01:05:30.992010 | orchestrator | service-ks-register : barbican | Creating/deleting endpoints ------------ 6.80s 2026-02-02 01:05:30.992020 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.79s 2026-02-02 01:05:30.992029 | orchestrator | service-check-containers : barbican | Check containers ------------------ 5.13s 2026-02-02 01:05:30.992039 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.29s 2026-02-02 01:05:30.992048 | orchestrator | service-ks-register : barbican | Granting/revoking user roles ----------- 4.25s 2026-02-02 01:05:30.992058 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------- 4.05s 2026-02-02 01:05:30.992068 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.99s 2026-02-02 01:05:30.992077 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.83s 
2026-02-02 01:05:30.992087 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.49s 2026-02-02 01:05:30.992096 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.47s 2026-02-02 01:05:30.992106 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.38s 2026-02-02 01:05:30.992116 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.26s 2026-02-02 01:05:30.992125 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.18s 2026-02-02 01:05:30.992135 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.11s 2026-02-02 01:05:30.992149 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.99s 2026-02-02 01:05:30.992165 | orchestrator | 2026-02-02 01:05:30 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:05:30.992179 | orchestrator | 2026-02-02 01:05:30 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:05:30.992202 | orchestrator | 2026-02-02 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:05:34.053325 | orchestrator | 2026-02-02 01:05:34 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:05:34.054105 | orchestrator | 2026-02-02 01:05:34 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED 2026-02-02 01:05:34.054808 | orchestrator | 2026-02-02 01:05:34 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:05:34.056090 | orchestrator | 2026-02-02 01:05:34 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:05:34.056126 | orchestrator | 2026-02-02 01:05:34 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:05:37.080503 | orchestrator | 2026-02-02 01:05:37 | INFO  | Task 
c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:05:37.081552 | orchestrator | 2026-02-02 01:05:37 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED 2026-02-02 01:05:37.082203 | orchestrator | 2026-02-02 01:05:37 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:05:37.082617 | orchestrator | 2026-02-02 01:05:37 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:05:37.082655 | orchestrator | 2026-02-02 01:05:37 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:05:40.105815 | orchestrator | 2026-02-02 01:05:40 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:05:40.105871 | orchestrator | 2026-02-02 01:05:40 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED 2026-02-02 01:05:40.106389 | orchestrator | 2026-02-02 01:05:40 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:05:40.107187 | orchestrator | 2026-02-02 01:05:40 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:05:40.107201 | orchestrator | 2026-02-02 01:05:40 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:05:43.132262 | orchestrator | 2026-02-02 01:05:43 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:05:43.133472 | orchestrator | 2026-02-02 01:05:43 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED 2026-02-02 01:05:43.134125 | orchestrator | 2026-02-02 01:05:43 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:05:43.134833 | orchestrator | 2026-02-02 01:05:43 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:05:43.134891 | orchestrator | 2026-02-02 01:05:43 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:05:46.160302 | orchestrator | 2026-02-02 01:05:46 | INFO  | Task 
c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED
2026-02-02 01:05:46.161083 | orchestrator | 2026-02-02 01:05:46 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED
2026-02-02 01:05:46.162824 | orchestrator | 2026-02-02 01:05:46 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED
2026-02-02 01:05:46.163886 | orchestrator | 2026-02-02 01:05:46 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED
2026-02-02 01:05:46.163920 | orchestrator | 2026-02-02 01:05:46 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:05:49.205029 | orchestrator | 2026-02-02 01:05:49 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED
2026-02-02 01:05:49.208957 | orchestrator | 2026-02-02 01:05:49 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED
2026-02-02 01:05:49.209828 | orchestrator | 2026-02-02 01:05:49 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED
2026-02-02 01:05:49.215251 | orchestrator | 2026-02-02 01:05:49 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED
2026-02-02 01:05:49.215311 | orchestrator | 2026-02-02 01:05:49 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:05:52.266978 | orchestrator | 2026-02-02 01:05:52 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED
2026-02-02 01:05:52.267115 | orchestrator | 2026-02-02 01:05:52 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED
2026-02-02 01:05:52.267497 | orchestrator | 2026-02-02 01:05:52 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED
2026-02-02 01:05:52.268705 | orchestrator | 2026-02-02 01:05:52 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED
2026-02-02 01:05:52.268789 | orchestrator | 2026-02-02 01:05:52 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:05:55.310106 | orchestrator | 2026-02-02 01:05:55 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED
2026-02-02 01:05:55.310722 | orchestrator | 2026-02-02 01:05:55 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED
2026-02-02 01:05:55.313700 | orchestrator | 2026-02-02 01:05:55 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED
2026-02-02 01:05:55.314755 | orchestrator | 2026-02-02 01:05:55 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED
2026-02-02 01:05:55.314805 | orchestrator | 2026-02-02 01:05:55 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:05:58.357484 | orchestrator | 2026-02-02 01:05:58 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED
2026-02-02 01:05:58.357600 | orchestrator | 2026-02-02 01:05:58 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED
2026-02-02 01:05:58.359772 | orchestrator | 2026-02-02 01:05:58 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED
2026-02-02 01:05:58.360245 | orchestrator | 2026-02-02 01:05:58 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED
2026-02-02 01:05:58.360346 | orchestrator | 2026-02-02 01:05:58 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:06:01.399043 | orchestrator | 2026-02-02 01:06:01 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED
2026-02-02 01:06:01.400547 | orchestrator | 2026-02-02 01:06:01 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED
2026-02-02 01:06:01.403322 | orchestrator | 2026-02-02 01:06:01 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED
2026-02-02 01:06:01.404879 | orchestrator | 2026-02-02 01:06:01 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED
2026-02-02 01:06:01.404924 | orchestrator | 2026-02-02 01:06:01 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:06:04.450281 | orchestrator | 2026-02-02 01:06:04 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED
2026-02-02 01:06:04.450357 | orchestrator | 2026-02-02 01:06:04 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED
2026-02-02 01:06:04.450467 | orchestrator | 2026-02-02 01:06:04 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED
2026-02-02 01:06:04.451873 | orchestrator | 2026-02-02 01:06:04 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED
2026-02-02 01:06:04.451976 | orchestrator | 2026-02-02 01:06:04 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:06:07.489009 | orchestrator | 2026-02-02 01:06:07 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED
2026-02-02 01:06:07.492175 | orchestrator | 2026-02-02 01:06:07 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state STARTED
2026-02-02 01:06:07.494492 | orchestrator | 2026-02-02 01:06:07 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED
2026-02-02 01:06:07.497946 | orchestrator | 2026-02-02 01:06:07 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED
2026-02-02 01:06:07.498089 | orchestrator | 2026-02-02 01:06:07 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:06:10.539103 | orchestrator | 2026-02-02 01:06:10 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED
2026-02-02 01:06:10.540006 | orchestrator | 2026-02-02 01:06:10 | INFO  | Task 9b4b2f51-9600-4d5e-b2e7-75499a6dfdbd is in state SUCCESS
2026-02-02 01:06:10.542546 | orchestrator |
2026-02-02 01:06:10.542591 | orchestrator |
2026-02-02 01:06:10.542602 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 01:06:10.542613 | orchestrator |
2026-02-02 01:06:10.542622 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 01:06:10.542637 | orchestrator | Monday 02 February 2026 01:04:47 +0000 (0:00:00.474) 0:00:00.474
*******
2026-02-02 01:06:10.542868 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:06:10.542891 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:06:10.542907 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:06:10.542922 | orchestrator |
2026-02-02 01:06:10.542937 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 01:06:10.542952 | orchestrator | Monday 02 February 2026 01:04:47 +0000 (0:00:00.326) 0:00:00.801 *******
2026-02-02 01:06:10.542967 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-02-02 01:06:10.542983 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-02-02 01:06:10.542993 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-02-02 01:06:10.543002 | orchestrator |
2026-02-02 01:06:10.543011 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-02-02 01:06:10.543019 | orchestrator |
2026-02-02 01:06:10.543028 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-02 01:06:10.543037 | orchestrator | Monday 02 February 2026 01:04:47 +0000 (0:00:00.383) 0:00:01.185 *******
2026-02-02 01:06:10.543046 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:06:10.543056 | orchestrator |
2026-02-02 01:06:10.543065 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-02-02 01:06:10.543088 | orchestrator | Monday 02 February 2026 01:04:48 +0000 (0:00:00.946) 0:00:02.131 *******
2026-02-02 01:06:10.543097 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-02-02 01:06:10.543106 | orchestrator |
2026-02-02 01:06:10.543115 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] ***********
2026-02-02 01:06:10.543124 | orchestrator | Monday 02 February 2026 01:04:52 +0000 (0:00:03.470) 0:00:05.601 *******
2026-02-02 01:06:10.543132 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-02-02 01:06:10.543141 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-02-02 01:06:10.543151 | orchestrator |
2026-02-02 01:06:10.543160 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-02-02 01:06:10.543169 | orchestrator | Monday 02 February 2026 01:04:58 +0000 (0:00:06.335) 0:00:11.937 *******
2026-02-02 01:06:10.543178 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-02 01:06:10.543187 | orchestrator |
2026-02-02 01:06:10.543196 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-02-02 01:06:10.543204 | orchestrator | Monday 02 February 2026 01:05:01 +0000 (0:00:03.047) 0:00:14.984 *******
2026-02-02 01:06:10.543213 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-02-02 01:06:10.543222 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-02 01:06:10.543231 | orchestrator |
2026-02-02 01:06:10.543240 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-02-02 01:06:10.543248 | orchestrator | Monday 02 February 2026 01:05:05 +0000 (0:00:03.824) 0:00:18.809 *******
2026-02-02 01:06:10.543257 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-02 01:06:10.543266 | orchestrator |
2026-02-02 01:06:10.543275 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] **********
2026-02-02 01:06:10.543283 | orchestrator | Monday 02 February 2026 01:05:09 +0000 (0:00:03.643) 0:00:22.453 *******
2026-02-02 01:06:10.543292 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-02-02 01:06:10.543301 |
orchestrator | 2026-02-02 01:06:10.543309 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-02 01:06:10.543318 | orchestrator | Monday 02 February 2026 01:05:13 +0000 (0:00:03.801) 0:00:26.254 ******* 2026-02-02 01:06:10.543327 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:06:10.543336 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:06:10.543349 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:06:10.543363 | orchestrator | 2026-02-02 01:06:10.543388 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-02 01:06:10.543403 | orchestrator | Monday 02 February 2026 01:05:13 +0000 (0:00:00.410) 0:00:26.664 ******* 2026-02-02 01:06:10.543441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.543458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.543476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 
01:06:10.543487 | orchestrator |
2026-02-02 01:06:10.543498 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-02-02 01:06:10.543509 | orchestrator | Monday 02 February 2026 01:05:14 +0000 (0:00:01.314) 0:00:27.979 *******
2026-02-02 01:06:10.543520 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:06:10.543531 | orchestrator |
2026-02-02 01:06:10.543541 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-02-02 01:06:10.543551 | orchestrator | Monday 02 February 2026 01:05:15 +0000 (0:00:00.297) 0:00:28.276 *******
2026-02-02 01:06:10.543561 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:06:10.543571 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:06:10.543588 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:06:10.543598 | orchestrator |
2026-02-02 01:06:10.543612 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-02 01:06:10.543627 | orchestrator | Monday 02 February 2026 01:05:16 +0000 (0:00:01.078) 0:00:29.355 *******
2026-02-02 01:06:10.543641 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:06:10.543703 | orchestrator |
2026-02-02 01:06:10.543719 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-02-02 01:06:10.543734 | orchestrator | Monday 02 February 2026 01:05:16 +0000 (0:00:00.466) 0:00:29.821 *******
2026-02-02 01:06:10.543752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.543783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.543810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.543827 | orchestrator | 2026-02-02 01:06:10.543912 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-02 01:06:10.543983 | orchestrator | Monday 02 February 2026 01:05:18 +0000 (0:00:01.712) 0:00:31.534 ******* 2026-02-02 01:06:10.544021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 01:06:10.544040 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:06:10.544105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 01:06:10.544118 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:06:10.544134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': 
'30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 01:06:10.544144 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:06:10.544153 | orchestrator | 2026-02-02 01:06:10.544162 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-02 01:06:10.544171 | orchestrator | Monday 02 February 2026 01:05:19 +0000 (0:00:00.869) 0:00:32.403 ******* 2026-02-02 01:06:10.544180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 01:06:10.544197 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:06:10.544206 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 01:06:10.544216 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:06:10.544235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 01:06:10.544245 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:06:10.544254 | orchestrator | 2026-02-02 01:06:10.544263 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-02 01:06:10.544271 | orchestrator | Monday 02 February 2026 01:05:20 +0000 (0:00:01.454) 0:00:33.858 ******* 2026-02-02 01:06:10.544285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.544302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.544313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.544322 | orchestrator | 2026-02-02 01:06:10.544331 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-02 01:06:10.544341 | orchestrator | Monday 
02 February 2026 01:05:22 +0000 (0:00:01.758) 0:00:35.616 ******* 2026-02-02 01:06:10.544356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.544371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.544388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.544398 | orchestrator | 2026-02-02 01:06:10.544407 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-02 01:06:10.544416 | orchestrator | Monday 02 February 2026 01:05:26 +0000 (0:00:04.292) 0:00:39.908 ******* 2026-02-02 01:06:10.544425 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-02-02 01:06:10.544435 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:06:10.544444 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-02-02 01:06:10.544453 | orchestrator | skipping: [testbed-node-1] 
2026-02-02 01:06:10.544462 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-02 01:06:10.544470 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:06:10.544479 | orchestrator |
2026-02-02 01:06:10.544488 | orchestrator | TASK [Configure uWSGI for Placement] *******************************************
2026-02-02 01:06:10.544496 | orchestrator | Monday 02 February 2026 01:05:27 +0000 (0:00:00.465) 0:00:40.374 *******
2026-02-02 01:06:10.544505 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:06:10.544514 | orchestrator |
2026-02-02 01:06:10.544523 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] **********
2026-02-02 01:06:10.544537 | orchestrator | Monday 02 February 2026 01:05:27 +0000 (0:00:00.728) 0:00:41.103 *******
2026-02-02 01:06:10.544546 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:06:10.544555 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:06:10.544564 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:06:10.544573 | orchestrator |
2026-02-02 01:06:10.544582 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-02-02 01:06:10.544590 | orchestrator | Monday 02 February 2026 01:05:31 +0000 (0:00:03.435) 0:00:44.538 *******
2026-02-02 01:06:10.544599 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:06:10.544607 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:06:10.544616 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:06:10.544630 | orchestrator |
2026-02-02 01:06:10.544668 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-02-02 01:06:10.544686 | orchestrator | Monday 02 February 2026 01:05:33 +0000 (0:00:02.425) 0:00:46.964 *******
2026-02-02 01:06:10.544725 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 01:06:10.544743 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:06:10.544756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 01:06:10.544766 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:06:10.544776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 01:06:10.544835 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:06:10.544845 | orchestrator | 2026-02-02 01:06:10.544854 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-02-02 01:06:10.544863 | orchestrator | Monday 02 February 2026 01:05:34 +0000 (0:00:01.190) 0:00:48.155 ******* 2026-02-02 01:06:10.544882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.544907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.544918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 01:06:10.544928 | orchestrator | 2026-02-02 01:06:10.544938 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart containers] *** 2026-02-02 01:06:10.544946 | orchestrator | Monday 02 February 2026 01:05:37 +0000 (0:00:02.107) 0:00:50.263 ******* 2026-02-02 01:06:10.544955 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 01:06:10.544964 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:06:10.544973 | orchestrator | } 2026-02-02 01:06:10.544982 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 01:06:10.544991 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:06:10.545000 | orchestrator | } 2026-02-02 01:06:10.545008 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 01:06:10.545017 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:06:10.545026 | orchestrator | } 2026-02-02 01:06:10.545035 | orchestrator | 2026-02-02 01:06:10.545044 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 01:06:10.545053 | orchestrator | Monday 02 February 2026 
01:05:37 +0000 (0:00:00.545) 0:00:50.809 ******* 2026-02-02 01:06:10.545068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 01:06:10.545084 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:06:10.545099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 01:06:10.545109 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:06:10.545119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 01:06:10.545128 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:06:10.545138 | orchestrator | 2026-02-02 01:06:10.545147 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-02 01:06:10.545155 | orchestrator | Monday 02 February 2026 01:05:38 +0000 (0:00:01.041) 0:00:51.850 ******* 2026-02-02 01:06:10.545164 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:06:10.545173 | orchestrator | 2026-02-02 01:06:10.545186 | orchestrator | TASK [placement : Creating placement databases user and 
setting permissions] *** 2026-02-02 01:06:10.545201 | orchestrator | Monday 02 February 2026 01:05:40 +0000 (0:00:02.007) 0:00:53.857 ******* 2026-02-02 01:06:10.545218 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:06:10.545235 | orchestrator | 2026-02-02 01:06:10.545251 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-02 01:06:10.545266 | orchestrator | Monday 02 February 2026 01:05:42 +0000 (0:00:02.106) 0:00:55.964 ******* 2026-02-02 01:06:10.545284 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:06:10.545310 | orchestrator | 2026-02-02 01:06:10.545326 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-02 01:06:10.545340 | orchestrator | Monday 02 February 2026 01:05:58 +0000 (0:00:15.291) 0:01:11.255 ******* 2026-02-02 01:06:10.545349 | orchestrator | 2026-02-02 01:06:10.545360 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-02 01:06:10.545376 | orchestrator | Monday 02 February 2026 01:05:58 +0000 (0:00:00.147) 0:01:11.403 ******* 2026-02-02 01:06:10.545391 | orchestrator | 2026-02-02 01:06:10.545406 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-02 01:06:10.545422 | orchestrator | Monday 02 February 2026 01:05:58 +0000 (0:00:00.500) 0:01:11.904 ******* 2026-02-02 01:06:10.545437 | orchestrator | 2026-02-02 01:06:10.545451 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-02 01:06:10.545464 | orchestrator | Monday 02 February 2026 01:05:58 +0000 (0:00:00.080) 0:01:11.984 ******* 2026-02-02 01:06:10.545473 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:06:10.545482 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:06:10.545490 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:06:10.545499 | orchestrator | 2026-02-02 01:06:10.545515 | 
orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:06:10.545550 | orchestrator | testbed-node-0 : ok=23  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-02 01:06:10.545563 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 01:06:10.545581 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 01:06:10.545596 | orchestrator | 2026-02-02 01:06:10.545612 | orchestrator | 2026-02-02 01:06:10.545627 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:06:10.545643 | orchestrator | Monday 02 February 2026 01:06:09 +0000 (0:00:10.982) 0:01:22.966 ******* 2026-02-02 01:06:10.545741 | orchestrator | =============================================================================== 2026-02-02 01:06:10.545754 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.29s 2026-02-02 01:06:10.545763 | orchestrator | placement : Restart placement-api container ---------------------------- 10.98s 2026-02-02 01:06:10.545771 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 6.33s 2026-02-02 01:06:10.545780 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.29s 2026-02-02 01:06:10.545789 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.82s 2026-02-02 01:06:10.545798 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 3.80s 2026-02-02 01:06:10.545807 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.64s 2026-02-02 01:06:10.545821 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 3.47s 2026-02-02 01:06:10.545836 | orchestrator | 
service-uwsgi-config : Copying over placement-api uWSGI config ---------- 3.44s 2026-02-02 01:06:10.545892 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.05s 2026-02-02 01:06:10.545911 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.43s 2026-02-02 01:06:10.545928 | orchestrator | service-check-containers : placement | Check containers ----------------- 2.11s 2026-02-02 01:06:10.545943 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.11s 2026-02-02 01:06:10.545957 | orchestrator | placement : Creating placement databases -------------------------------- 2.01s 2026-02-02 01:06:10.545972 | orchestrator | placement : Copying over config.json files for services ----------------- 1.76s 2026-02-02 01:06:10.545981 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.71s 2026-02-02 01:06:10.545993 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.45s 2026-02-02 01:06:10.546091 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.31s 2026-02-02 01:06:10.546112 | orchestrator | placement : Copying over existing policy file --------------------------- 1.19s 2026-02-02 01:06:10.546122 | orchestrator | placement : Set placement policy file ----------------------------------- 1.08s 2026-02-02 01:06:10.546131 | orchestrator | 2026-02-02 01:06:10 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:06:10.546141 | orchestrator | 2026-02-02 01:06:10 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:10.546150 | orchestrator | 2026-02-02 01:06:10 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:13.581531 | orchestrator | 2026-02-02 01:06:13 | INFO  | Task e719ef0c-d533-4793-b600-dd5987000a7f is in state STARTED 2026-02-02 01:06:13.583549 | 
orchestrator | 2026-02-02 01:06:13 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:13.586952 | orchestrator | 2026-02-02 01:06:13 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:06:13.588499 | orchestrator | 2026-02-02 01:06:13 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:13.588515 | orchestrator | 2026-02-02 01:06:13 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:16.636556 | orchestrator | 2026-02-02 01:06:16 | INFO  | Task e719ef0c-d533-4793-b600-dd5987000a7f is in state SUCCESS 2026-02-02 01:06:16.640760 | orchestrator | 2026-02-02 01:06:16 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:16.642582 | orchestrator | 2026-02-02 01:06:16 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:06:16.644298 | orchestrator | 2026-02-02 01:06:16 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:16.644320 | orchestrator | 2026-02-02 01:06:16 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:19.675934 | orchestrator | 2026-02-02 01:06:19 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:19.676242 | orchestrator | 2026-02-02 01:06:19 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:19.677184 | orchestrator | 2026-02-02 01:06:19 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:06:19.677918 | orchestrator | 2026-02-02 01:06:19 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:19.677961 | orchestrator | 2026-02-02 01:06:19 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:22.717743 | orchestrator | 2026-02-02 01:06:22 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:22.720287 | orchestrator | 2026-02-02 
01:06:22 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:22.722727 | orchestrator | 2026-02-02 01:06:22 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:06:22.725272 | orchestrator | 2026-02-02 01:06:22 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:22.725329 | orchestrator | 2026-02-02 01:06:22 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:25.752513 | orchestrator | 2026-02-02 01:06:25 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:25.754735 | orchestrator | 2026-02-02 01:06:25 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:25.757874 | orchestrator | 2026-02-02 01:06:25 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:06:25.758942 | orchestrator | 2026-02-02 01:06:25 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:25.758977 | orchestrator | 2026-02-02 01:06:25 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:28.808686 | orchestrator | 2026-02-02 01:06:28 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:28.808798 | orchestrator | 2026-02-02 01:06:28 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:28.810339 | orchestrator | 2026-02-02 01:06:28 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:06:28.811131 | orchestrator | 2026-02-02 01:06:28 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:28.811155 | orchestrator | 2026-02-02 01:06:28 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:31.888610 | orchestrator | 2026-02-02 01:06:31 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:31.889753 | orchestrator | 2026-02-02 01:06:31 | INFO  | Task 
a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:31.891570 | orchestrator | 2026-02-02 01:06:31 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:06:31.892775 | orchestrator | 2026-02-02 01:06:31 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:31.893340 | orchestrator | 2026-02-02 01:06:31 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:34.953255 | orchestrator | 2026-02-02 01:06:34 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:34.953341 | orchestrator | 2026-02-02 01:06:34 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:34.953986 | orchestrator | 2026-02-02 01:06:34 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:06:34.954926 | orchestrator | 2026-02-02 01:06:34 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:34.954950 | orchestrator | 2026-02-02 01:06:34 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:38.000740 | orchestrator | 2026-02-02 01:06:38 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:38.000948 | orchestrator | 2026-02-02 01:06:38 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:38.002164 | orchestrator | 2026-02-02 01:06:38 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state STARTED 2026-02-02 01:06:38.003328 | orchestrator | 2026-02-02 01:06:38 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:38.003365 | orchestrator | 2026-02-02 01:06:38 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:41.081705 | orchestrator | 2026-02-02 01:06:41 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:41.082981 | orchestrator | 2026-02-02 01:06:41 | INFO  | Task 
a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:41.083538 | orchestrator | 2026-02-02 01:06:41 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state STARTED 2026-02-02 01:06:41.085670 | orchestrator | 2026-02-02 01:06:41 | INFO  | Task 49d7ef66-ddb1-4dea-afa1-085cda793c2f is in state SUCCESS 2026-02-02 01:06:41.087613 | orchestrator | 2026-02-02 01:06:41.087718 | orchestrator | 2026-02-02 01:06:41.087744 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 01:06:41.087765 | orchestrator | 2026-02-02 01:06:41.087779 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 01:06:41.087822 | orchestrator | Monday 02 February 2026 01:06:14 +0000 (0:00:00.166) 0:00:00.166 ******* 2026-02-02 01:06:41.087844 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:06:41.087864 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:06:41.087882 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:06:41.087900 | orchestrator | 2026-02-02 01:06:41.087918 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 01:06:41.087936 | orchestrator | Monday 02 February 2026 01:06:15 +0000 (0:00:00.288) 0:00:00.454 ******* 2026-02-02 01:06:41.087953 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-02 01:06:41.087972 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-02 01:06:41.087992 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-02 01:06:41.088012 | orchestrator | 2026-02-02 01:06:41.088492 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-02-02 01:06:41.088506 | orchestrator | 2026-02-02 01:06:41.088517 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-02-02 01:06:41.088541 | orchestrator | Monday 02 February 
2026 01:06:15 +0000 (0:00:00.600) 0:00:01.055 ******* 2026-02-02 01:06:41.088554 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:06:41.088565 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:06:41.088576 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:06:41.088588 | orchestrator | 2026-02-02 01:06:41.088599 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:06:41.088611 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:06:41.088648 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:06:41.088670 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:06:41.088689 | orchestrator | 2026-02-02 01:06:41.088700 | orchestrator | 2026-02-02 01:06:41.088712 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:06:41.088723 | orchestrator | Monday 02 February 2026 01:06:16 +0000 (0:00:00.642) 0:00:01.698 ******* 2026-02-02 01:06:41.088803 | orchestrator | =============================================================================== 2026-02-02 01:06:41.088817 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.64s 2026-02-02 01:06:41.088828 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2026-02-02 01:06:41.088840 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-02-02 01:06:41.088852 | orchestrator | 2026-02-02 01:06:41.088863 | orchestrator | 2026-02-02 01:06:41.088874 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 01:06:41.088885 | orchestrator | 2026-02-02 01:06:41.088897 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-02-02 01:06:41.088908 | orchestrator | Monday 02 February 2026 01:03:25 +0000 (0:00:00.235) 0:00:00.236 ******* 2026-02-02 01:06:41.088919 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:06:41.088930 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:06:41.088942 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:06:41.088953 | orchestrator | 2026-02-02 01:06:41.088964 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 01:06:41.088975 | orchestrator | Monday 02 February 2026 01:03:25 +0000 (0:00:00.354) 0:00:00.590 ******* 2026-02-02 01:06:41.088987 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-02-02 01:06:41.088999 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-02-02 01:06:41.089010 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-02-02 01:06:41.089021 | orchestrator | 2026-02-02 01:06:41.089032 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-02-02 01:06:41.089043 | orchestrator | 2026-02-02 01:06:41.089469 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-02 01:06:41.089482 | orchestrator | Monday 02 February 2026 01:03:25 +0000 (0:00:00.352) 0:00:00.943 ******* 2026-02-02 01:06:41.089494 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:06:41.089505 | orchestrator | 2026-02-02 01:06:41.089516 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************ 2026-02-02 01:06:41.089527 | orchestrator | Monday 02 February 2026 01:03:26 +0000 (0:00:00.608) 0:00:01.551 ******* 2026-02-02 01:06:41.089539 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-02-02 01:06:41.089550 | orchestrator | 2026-02-02 01:06:41.089561 | orchestrator 
| TASK [service-ks-register : designate | Creating/deleting endpoints] *********** 2026-02-02 01:06:41.089572 | orchestrator | Monday 02 February 2026 01:03:30 +0000 (0:00:04.106) 0:00:05.658 ******* 2026-02-02 01:06:41.089583 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-02-02 01:06:41.089595 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-02-02 01:06:41.089606 | orchestrator | 2026-02-02 01:06:41.089618 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-02-02 01:06:41.089683 | orchestrator | Monday 02 February 2026 01:03:38 +0000 (0:00:07.422) 0:00:13.081 ******* 2026-02-02 01:06:41.089696 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-02 01:06:41.089707 | orchestrator | 2026-02-02 01:06:41.089718 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-02-02 01:06:41.089730 | orchestrator | Monday 02 February 2026 01:03:41 +0000 (0:00:03.382) 0:00:16.463 ******* 2026-02-02 01:06:41.089803 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-02-02 01:06:41.089828 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 01:06:41.089848 | orchestrator | 2026-02-02 01:06:41.089869 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-02-02 01:06:41.089888 | orchestrator | Monday 02 February 2026 01:03:45 +0000 (0:00:04.335) 0:00:20.798 ******* 2026-02-02 01:06:41.089909 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 01:06:41.089931 | orchestrator | 2026-02-02 01:06:41.089951 | orchestrator | TASK [service-ks-register : designate | Granting/revoking user roles] ********** 2026-02-02 01:06:41.089970 | orchestrator | Monday 02 February 2026 01:03:49 +0000 (0:00:03.494) 0:00:24.292 ******* 
2026-02-02 01:06:41.089982 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-02 01:06:41.089993 | orchestrator | 2026-02-02 01:06:41.090004 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-02 01:06:41.090013 | orchestrator | Monday 02 February 2026 01:03:53 +0000 (0:00:03.830) 0:00:28.123 ******* 2026-02-02 01:06:41.090078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:06:41.090095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:06:41.090118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:06:41.090171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}}) 2026-02-02 01:06:41.090346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090401 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090423 | orchestrator | 2026-02-02 01:06:41.090433 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-02 01:06:41.090443 | orchestrator | Monday 02 February 2026 01:03:56 +0000 (0:00:03.431) 0:00:31.555 ******* 2026-02-02 01:06:41.090457 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:06:41.090468 | orchestrator | 2026-02-02 01:06:41.090478 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-02 01:06:41.090492 | orchestrator | Monday 02 February 2026 01:03:56 +0000 (0:00:00.138) 0:00:31.693 ******* 2026-02-02 01:06:41.090502 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:06:41.090512 | orchestrator | 
skipping: [testbed-node-1] 2026-02-02 01:06:41.090522 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:06:41.090538 | orchestrator | 2026-02-02 01:06:41.090548 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-02 01:06:41.090558 | orchestrator | Monday 02 February 2026 01:03:57 +0000 (0:00:00.365) 0:00:32.058 ******* 2026-02-02 01:06:41.090568 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:06:41.090578 | orchestrator | 2026-02-02 01:06:41.090588 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-02 01:06:41.090598 | orchestrator | Monday 02 February 2026 01:03:57 +0000 (0:00:00.782) 0:00:32.841 ******* 2026-02-02 01:06:41.090609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:06:41.090620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:06:41.090674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:06:41.090692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090737 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.090995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.091019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.091029 | orchestrator | 2026-02-02 01:06:41.091040 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-02 01:06:41.091050 | orchestrator | Monday 02 February 2026 01:04:04 +0000 (0:00:06.665) 0:00:39.506 ******* 2026-02-02 01:06:41.091060 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:06:41.091071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:06:41.091131 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 01:06:41.091157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091206 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:06:41.091222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 01:06:41.091238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 01:06:41.091353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091388 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:06:41.091406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091531 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:06:41.091547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.091558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.091568 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:06:41.091578 | orchestrator |
2026-02-02 01:06:41.091588 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-02-02 01:06:41.091598 | orchestrator | Monday 02 February 2026 01:04:07 +0000 (0:00:02.491) 0:00:41.998 *******
2026-02-02 01:06:41.091615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'},
'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:06:41.091658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 01:06:41.091704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:06:41.091729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 01:06:41.091751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:06:41.091762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 01:06:41.091862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.091951 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:06:41.092010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.092029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.092040 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:06:41.092051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.092061 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:06:41.092071 | orchestrator |
2026-02-02 01:06:41.092081 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-02-02 01:06:41.092091 | orchestrator | Monday 02 February 2026 01:04:10 +0000 (0:00:03.495) 0:00:45.493 *******
2026-02-02 01:06:41.092102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:06:41.092119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:06:41.092157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:06:41.092173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092209 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 
'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.092380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.092391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.092401 | orchestrator |
2026-02-02 01:06:41.092411 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-02-02 01:06:41.092421 | orchestrator | Monday 02 February 2026 01:04:17 +0000 (0:00:06.930) 0:00:52.424 *******
2026-02-02 01:06:41.092431 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:06:41.092447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:06:41.092507 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:06:41.092524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.092820 | orchestrator | 2026-02-02 01:06:41.092830 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-02 01:06:41.092840 | orchestrator | Monday 02 February 2026 01:04:40 +0000 (0:00:22.950) 0:01:15.375 ******* 2026-02-02 01:06:41.092850 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-02 01:06:41.092860 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-02 01:06:41.092870 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-02 01:06:41.092880 | orchestrator | 2026-02-02 01:06:41.092890 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-02 01:06:41.092900 | orchestrator | Monday 02 February 2026 01:04:48 +0000 (0:00:07.709) 0:01:23.084 ******* 2026-02-02 01:06:41.092910 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 
2026-02-02 01:06:41.092920 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-02 01:06:41.092930 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-02 01:06:41.092940 | orchestrator | 2026-02-02 01:06:41.092950 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-02 01:06:41.092959 | orchestrator | Monday 02 February 2026 01:04:51 +0000 (0:00:03.637) 0:01:26.722 ******* 2026-02-02 01:06:41.092970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:06:41.093008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:06:41.093028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:06:41.093045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.093056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.093107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.093153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.093212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.093222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:06:41.093232 | orchestrator | 2026-02-02 01:06:41.093241 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-02-02 01:06:41.093249 | orchestrator | Monday 02 February 2026 01:04:55 +0000 (0:00:03.897) 0:01:30.619 ******* 2026-02-02 01:06:41.093262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:06:41.093276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:06:41.093288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:06:41.093297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.093305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 01:06:41.093313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.093371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}})
2026-02-02 01:06:41.093379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 01:06:41.093391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093450 | orchestrator |
2026-02-02 01:06:41.093458 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-02 01:06:41.093467 | orchestrator | Monday 02 February 2026 01:04:58 +0000 (0:00:02.995) 0:01:33.615 *******
2026-02-02 01:06:41.093475 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:06:41.093483 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:06:41.093491 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:06:41.093499 | orchestrator |
2026-02-02 01:06:41.093507 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-02-02 01:06:41.093515 | orchestrator | Monday 02 February 2026 01:04:59 +0000 (0:00:00.762) 0:01:34.377 *******
2026-02-02 01:06:41.093529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:06:41.093546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 01:06:41.093555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093588 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:06:41.093601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:06:41.093620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 01:06:41.093643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:06:41.093669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 01:06:41.093688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093725 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:06:41.093734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093755 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:06:41.093763 | orchestrator |
2026-02-02 01:06:41.093771 | orchestrator | TASK [service-check-containers : designate | Check containers] *****************
2026-02-02 01:06:41.093779 | orchestrator | Monday 02 February 2026 01:05:00 +0000 (0:00:01.052) 0:01:35.430 *******
2026-02-02 01:06:41.093792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:06:41.093805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:06:41.093813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:06:41.093822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 01:06:41.093831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 01:06:41.093848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 01:06:41.093860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.093978 | orchestrator |
2026-02-02 01:06:41.093986 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] ***
2026-02-02 01:06:41.093994 | orchestrator | Monday 02 February 2026 01:05:05 +0000 (0:00:05.146) 0:01:40.576 *******
2026-02-02 01:06:41.094003 | orchestrator | changed: [testbed-node-0] => {
2026-02-02 01:06:41.094011 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:06:41.094050 | orchestrator | }
2026-02-02 01:06:41.094059 | orchestrator | changed: [testbed-node-1] => {
2026-02-02 01:06:41.094067 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:06:41.094075 | orchestrator | }
2026-02-02 01:06:41.094083 | orchestrator | changed: [testbed-node-2] => {
2026-02-02 01:06:41.094091 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:06:41.094104 | orchestrator | }
2026-02-02 01:06:41.094112 | orchestrator |
2026-02-02 01:06:41.094121 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-02 01:06:41.094129 | orchestrator | Monday 02 February 2026 01:05:06 +0000 (0:00:00.891) 0:01:41.468 *******
2026-02-02 01:06:41.094141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:06:41.094150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 01:06:41.094159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.094174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.094182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.094195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.094203 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:06:41.094215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:06:41.094224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 01:06:41.094232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 01:06:41.094245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.094254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.094266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.094275 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:06:41.094286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:06:41.094295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 01:06:41.094303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.094319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.094328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.094340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:06:41.094348 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:06:41.094357 | orchestrator | 2026-02-02 01:06:41.094366 | orchestrator | TASK 
[designate : include_tasks] *********************************************** 2026-02-02 01:06:41.094374 | orchestrator | Monday 02 February 2026 01:05:08 +0000 (0:00:02.092) 0:01:43.561 ******* 2026-02-02 01:06:41.094382 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:06:41.094391 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:06:41.094399 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:06:41.094407 | orchestrator | 2026-02-02 01:06:41.094415 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-02-02 01:06:41.094423 | orchestrator | Monday 02 February 2026 01:05:09 +0000 (0:00:00.587) 0:01:44.148 ******* 2026-02-02 01:06:41.094431 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-02-02 01:06:41.094439 | orchestrator | 2026-02-02 01:06:41.094447 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-02-02 01:06:41.094455 | orchestrator | Monday 02 February 2026 01:05:11 +0000 (0:00:02.326) 0:01:46.475 ******* 2026-02-02 01:06:41.094463 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 01:06:41.094475 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-02-02 01:06:41.094483 | orchestrator | 2026-02-02 01:06:41.094491 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-02-02 01:06:41.094499 | orchestrator | Monday 02 February 2026 01:05:13 +0000 (0:00:02.397) 0:01:48.872 ******* 2026-02-02 01:06:41.094508 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:06:41.094516 | orchestrator | 2026-02-02 01:06:41.094524 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-02 01:06:41.094536 | orchestrator | Monday 02 February 2026 01:05:30 +0000 (0:00:16.702) 0:02:05.575 ******* 2026-02-02 01:06:41.094545 | orchestrator | 2026-02-02 01:06:41.094553 | orchestrator | TASK 
[designate : Flush handlers] ********************************************** 2026-02-02 01:06:41.094561 | orchestrator | Monday 02 February 2026 01:05:30 +0000 (0:00:00.120) 0:02:05.695 ******* 2026-02-02 01:06:41.094569 | orchestrator | 2026-02-02 01:06:41.094577 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-02 01:06:41.094585 | orchestrator | Monday 02 February 2026 01:05:30 +0000 (0:00:00.170) 0:02:05.866 ******* 2026-02-02 01:06:41.094593 | orchestrator | 2026-02-02 01:06:41.094601 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-02 01:06:41.094609 | orchestrator | Monday 02 February 2026 01:05:31 +0000 (0:00:00.141) 0:02:06.008 ******* 2026-02-02 01:06:41.094617 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:06:41.094639 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:06:41.094648 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:06:41.094656 | orchestrator | 2026-02-02 01:06:41.094664 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-02 01:06:41.094672 | orchestrator | Monday 02 February 2026 01:05:41 +0000 (0:00:10.058) 0:02:16.066 ******* 2026-02-02 01:06:41.094680 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:06:41.094688 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:06:41.094697 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:06:41.094705 | orchestrator | 2026-02-02 01:06:41.094713 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-02 01:06:41.094721 | orchestrator | Monday 02 February 2026 01:05:51 +0000 (0:00:10.773) 0:02:26.840 ******* 2026-02-02 01:06:41.094729 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:06:41.094738 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:06:41.094746 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:06:41.094754 | 
orchestrator | 2026-02-02 01:06:41.094762 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-02 01:06:41.094770 | orchestrator | Monday 02 February 2026 01:05:57 +0000 (0:00:05.618) 0:02:32.458 ******* 2026-02-02 01:06:41.094778 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:06:41.094786 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:06:41.094794 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:06:41.094803 | orchestrator | 2026-02-02 01:06:41.094811 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-02 01:06:41.094819 | orchestrator | Monday 02 February 2026 01:06:08 +0000 (0:00:10.969) 0:02:43.427 ******* 2026-02-02 01:06:41.094827 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:06:41.094835 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:06:41.094844 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:06:41.094852 | orchestrator | 2026-02-02 01:06:41.094860 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-02 01:06:41.094868 | orchestrator | Monday 02 February 2026 01:06:18 +0000 (0:00:09.913) 0:02:53.341 ******* 2026-02-02 01:06:41.094876 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:06:41.094885 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:06:41.094893 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:06:41.094900 | orchestrator | 2026-02-02 01:06:41.094908 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-02 01:06:41.094917 | orchestrator | Monday 02 February 2026 01:06:28 +0000 (0:00:10.494) 0:03:03.836 ******* 2026-02-02 01:06:41.094925 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:06:41.094933 | orchestrator | 2026-02-02 01:06:41.094941 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 
01:06:41.094949 | orchestrator | testbed-node-0 : ok=30  changed=24  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-02 01:06:41.094958 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 01:06:41.094975 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 01:06:41.094984 | orchestrator | 2026-02-02 01:06:41.094992 | orchestrator | 2026-02-02 01:06:41.095000 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:06:41.095009 | orchestrator | Monday 02 February 2026 01:06:37 +0000 (0:00:08.266) 0:03:12.103 ******* 2026-02-02 01:06:41.095017 | orchestrator | =============================================================================== 2026-02-02 01:06:41.095025 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.95s 2026-02-02 01:06:41.095033 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.70s 2026-02-02 01:06:41.095041 | orchestrator | designate : Restart designate-producer container ----------------------- 10.97s 2026-02-02 01:06:41.095049 | orchestrator | designate : Restart designate-api container ---------------------------- 10.77s 2026-02-02 01:06:41.095057 | orchestrator | designate : Restart designate-worker container ------------------------- 10.49s 2026-02-02 01:06:41.095065 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.06s 2026-02-02 01:06:41.095073 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.91s 2026-02-02 01:06:41.095081 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.27s 2026-02-02 01:06:41.095093 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.71s 2026-02-02 01:06:41.095101 | 
orchestrator | service-ks-register : designate | Creating/deleting endpoints ----------- 7.42s 2026-02-02 01:06:41.095109 | orchestrator | designate : Copying over config.json files for services ----------------- 6.93s 2026-02-02 01:06:41.095117 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.67s 2026-02-02 01:06:41.095125 | orchestrator | designate : Restart designate-central container ------------------------- 5.62s 2026-02-02 01:06:41.095134 | orchestrator | service-check-containers : designate | Check containers ----------------- 5.15s 2026-02-02 01:06:41.095141 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.34s 2026-02-02 01:06:41.095149 | orchestrator | service-ks-register : designate | Creating/deleting services ------------ 4.11s 2026-02-02 01:06:41.095158 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.90s 2026-02-02 01:06:41.095166 | orchestrator | service-ks-register : designate | Granting/revoking user roles ---------- 3.83s 2026-02-02 01:06:41.095173 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.64s 2026-02-02 01:06:41.095182 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 3.50s 2026-02-02 01:06:41.095190 | orchestrator | 2026-02-02 01:06:41 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:41.095199 | orchestrator | 2026-02-02 01:06:41 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:44.117020 | orchestrator | 2026-02-02 01:06:44 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:44.117868 | orchestrator | 2026-02-02 01:06:44 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:44.118992 | orchestrator | 2026-02-02 01:06:44 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state STARTED 
2026-02-02 01:06:44.121088 | orchestrator | 2026-02-02 01:06:44 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:44.121144 | orchestrator | 2026-02-02 01:06:44 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:47.170242 | orchestrator | 2026-02-02 01:06:47 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:47.170772 | orchestrator | 2026-02-02 01:06:47 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:47.171512 | orchestrator | 2026-02-02 01:06:47 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state STARTED 2026-02-02 01:06:47.172889 | orchestrator | 2026-02-02 01:06:47 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:47.172908 | orchestrator | 2026-02-02 01:06:47 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:50.247086 | orchestrator | 2026-02-02 01:06:50 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:50.247837 | orchestrator | 2026-02-02 01:06:50 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:50.248639 | orchestrator | 2026-02-02 01:06:50 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state STARTED 2026-02-02 01:06:50.249330 | orchestrator | 2026-02-02 01:06:50 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:50.249367 | orchestrator | 2026-02-02 01:06:50 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:53.283014 | orchestrator | 2026-02-02 01:06:53 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:53.284488 | orchestrator | 2026-02-02 01:06:53 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:53.286399 | orchestrator | 2026-02-02 01:06:53 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state STARTED 2026-02-02 01:06:53.287846 | 
orchestrator | 2026-02-02 01:06:53 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:53.287894 | orchestrator | 2026-02-02 01:06:53 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:56.326285 | orchestrator | 2026-02-02 01:06:56 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:56.327362 | orchestrator | 2026-02-02 01:06:56 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:56.328678 | orchestrator | 2026-02-02 01:06:56 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state STARTED 2026-02-02 01:06:56.329942 | orchestrator | 2026-02-02 01:06:56 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:56.330286 | orchestrator | 2026-02-02 01:06:56 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:06:59.382301 | orchestrator | 2026-02-02 01:06:59 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:06:59.383750 | orchestrator | 2026-02-02 01:06:59 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:06:59.385205 | orchestrator | 2026-02-02 01:06:59 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state STARTED 2026-02-02 01:06:59.386372 | orchestrator | 2026-02-02 01:06:59 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:06:59.386425 | orchestrator | 2026-02-02 01:06:59 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:02.443236 | orchestrator | 2026-02-02 01:07:02 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:02.444117 | orchestrator | 2026-02-02 01:07:02 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:02.446951 | orchestrator | 2026-02-02 01:07:02 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state STARTED 2026-02-02 01:07:02.447913 | orchestrator | 2026-02-02 
01:07:02 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:07:02.448151 | orchestrator | 2026-02-02 01:07:02 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:05.491962 | orchestrator | 2026-02-02 01:07:05 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:05.492547 | orchestrator | 2026-02-02 01:07:05 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:05.493638 | orchestrator | 2026-02-02 01:07:05 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state STARTED 2026-02-02 01:07:05.494627 | orchestrator | 2026-02-02 01:07:05 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:07:05.494674 | orchestrator | 2026-02-02 01:07:05 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:08.532013 | orchestrator | 2026-02-02 01:07:08 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:08.533576 | orchestrator | 2026-02-02 01:07:08 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:08.535062 | orchestrator | 2026-02-02 01:07:08 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state STARTED 2026-02-02 01:07:08.536567 | orchestrator | 2026-02-02 01:07:08 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:07:08.536685 | orchestrator | 2026-02-02 01:07:08 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:11.571763 | orchestrator | 2026-02-02 01:07:11 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:11.571990 | orchestrator | 2026-02-02 01:07:11 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:11.572907 | orchestrator | 2026-02-02 01:07:11 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state STARTED 2026-02-02 01:07:11.573718 | orchestrator | 2026-02-02 01:07:11 | INFO  | Task 
2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:07:11.573843 | orchestrator | 2026-02-02 01:07:11 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:14.609282 | orchestrator | 2026-02-02 01:07:14 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:14.609533 | orchestrator | 2026-02-02 01:07:14 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:14.610336 | orchestrator | 2026-02-02 01:07:14 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state STARTED 2026-02-02 01:07:14.610840 | orchestrator | 2026-02-02 01:07:14 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:07:14.610883 | orchestrator | 2026-02-02 01:07:14 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:17.645559 | orchestrator | 2026-02-02 01:07:17 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:17.646060 | orchestrator | 2026-02-02 01:07:17 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:17.648436 | orchestrator | 2026-02-02 01:07:17 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state STARTED 2026-02-02 01:07:17.648739 | orchestrator | 2026-02-02 01:07:17 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:07:17.648948 | orchestrator | 2026-02-02 01:07:17 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:20.683223 | orchestrator | 2026-02-02 01:07:20 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:20.683301 | orchestrator | 2026-02-02 01:07:20 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:20.683707 | orchestrator | 2026-02-02 01:07:20 | INFO  | Task 74bbea2d-c6b9-454a-8001-1883262f0e0c is in state SUCCESS 2026-02-02 01:07:20.684403 | orchestrator | 2026-02-02 01:07:20 | INFO  | Task 
4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:07:20.685102 | orchestrator | 2026-02-02 01:07:20 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:07:20.685127 | orchestrator | 2026-02-02 01:07:20 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:23.719286 | orchestrator | 2026-02-02 01:07:23 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:23.722706 | orchestrator | 2026-02-02 01:07:23 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:23.724539 | orchestrator | 2026-02-02 01:07:23 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:07:23.725990 | orchestrator | 2026-02-02 01:07:23 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:07:23.726201 | orchestrator | 2026-02-02 01:07:23 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:26.765444 | orchestrator | 2026-02-02 01:07:26 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:26.765953 | orchestrator | 2026-02-02 01:07:26 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:26.769257 | orchestrator | 2026-02-02 01:07:26 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:07:26.769303 | orchestrator | 2026-02-02 01:07:26 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:07:26.769312 | orchestrator | 2026-02-02 01:07:26 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:29.811529 | orchestrator | 2026-02-02 01:07:29 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:29.813384 | orchestrator | 2026-02-02 01:07:29 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:29.815265 | orchestrator | 2026-02-02 01:07:29 | INFO  | Task 
4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:07:29.816860 | orchestrator | 2026-02-02 01:07:29 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:07:29.816882 | orchestrator | 2026-02-02 01:07:29 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:32.857978 | orchestrator | 2026-02-02 01:07:32 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:32.859121 | orchestrator | 2026-02-02 01:07:32 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:32.862433 | orchestrator | 2026-02-02 01:07:32 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:07:32.863377 | orchestrator | 2026-02-02 01:07:32 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state STARTED 2026-02-02 01:07:32.863484 | orchestrator | 2026-02-02 01:07:32 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:35.905255 | orchestrator | 2026-02-02 01:07:35 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:35.911643 | orchestrator | 2026-02-02 01:07:35 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:35.915877 | orchestrator | 2026-02-02 01:07:35 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:07:35.917240 | orchestrator | 2026-02-02 01:07:35 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:07:35.919196 | orchestrator | 2026-02-02 01:07:35 | INFO  | Task 2b4488cc-480f-4ee8-a98c-24e7e5fbfdbd is in state SUCCESS 2026-02-02 01:07:35.921477 | orchestrator | 2026-02-02 01:07:35.921514 | orchestrator | 2026-02-02 01:07:35.921521 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 01:07:35.921544 | orchestrator | 2026-02-02 01:07:35.921548 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-02-02 01:07:35.921553 | orchestrator | Monday 02 February 2026 01:06:43 +0000 (0:00:00.339) 0:00:00.339 ******* 2026-02-02 01:07:35.921557 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:07:35.921562 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:07:35.921566 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:07:35.921570 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:07:35.921650 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:07:35.921654 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:07:35.921658 | orchestrator | ok: [testbed-manager] 2026-02-02 01:07:35.921662 | orchestrator | 2026-02-02 01:07:35.921666 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 01:07:35.921670 | orchestrator | Monday 02 February 2026 01:06:44 +0000 (0:00:00.953) 0:00:01.292 ******* 2026-02-02 01:07:35.921684 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-02-02 01:07:35.921689 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-02-02 01:07:35.921693 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-02-02 01:07:35.921697 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-02 01:07:35.921701 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-02-02 01:07:35.921704 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-02-02 01:07:35.921709 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-02 01:07:35.921713 | orchestrator | 2026-02-02 01:07:35.921716 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-02 01:07:35.921720 | orchestrator | 2026-02-02 01:07:35.921724 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-02-02 01:07:35.921728 | orchestrator | Monday 02 February 2026 01:06:45 +0000 
(0:00:00.980) 0:00:02.272 ******* 2026-02-02 01:07:35.921733 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager 2026-02-02 01:07:35.921738 | orchestrator | 2026-02-02 01:07:35.921743 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting services] ************* 2026-02-02 01:07:35.921747 | orchestrator | Monday 02 February 2026 01:06:48 +0000 (0:00:02.209) 0:00:04.481 ******* 2026-02-02 01:07:35.921751 | orchestrator | changed: [testbed-node-3] => (item=swift (object-store)) 2026-02-02 01:07:35.921755 | orchestrator | 2026-02-02 01:07:35.921759 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting endpoints] ************ 2026-02-02 01:07:35.921762 | orchestrator | Monday 02 February 2026 01:06:51 +0000 (0:00:02.916) 0:00:07.398 ******* 2026-02-02 01:07:35.921767 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-02 01:07:35.921773 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-02 01:07:35.921777 | orchestrator | 2026-02-02 01:07:35.921781 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-02 01:07:35.921784 | orchestrator | Monday 02 February 2026 01:06:56 +0000 (0:00:05.953) 0:00:13.351 ******* 2026-02-02 01:07:35.921788 | orchestrator | ok: [testbed-node-3] => (item=service) 2026-02-02 01:07:35.921792 | orchestrator | 2026-02-02 01:07:35.921796 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-02 01:07:35.921800 | orchestrator | Monday 02 February 2026 01:07:00 +0000 (0:00:03.304) 0:00:16.656 ******* 2026-02-02 01:07:35.921804 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> 
service) 2026-02-02 01:07:35.921808 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 01:07:35.921812 | orchestrator | 2026-02-02 01:07:35.921816 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-02 01:07:35.921824 | orchestrator | Monday 02 February 2026 01:07:04 +0000 (0:00:03.999) 0:00:20.656 ******* 2026-02-02 01:07:35.921828 | orchestrator | ok: [testbed-node-3] => (item=admin) 2026-02-02 01:07:35.921833 | orchestrator | changed: [testbed-node-3] => (item=ResellerAdmin) 2026-02-02 01:07:35.921836 | orchestrator | 2026-02-02 01:07:35.921851 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting/revoking user roles] *********** 2026-02-02 01:07:35.921855 | orchestrator | Monday 02 February 2026 01:07:10 +0000 (0:00:06.457) 0:00:27.118 ******* 2026-02-02 01:07:35.921859 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service -> admin) 2026-02-02 01:07:35.921863 | orchestrator | 2026-02-02 01:07:35.921867 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:07:35.921871 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:07:35.921875 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:07:35.922052 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:07:35.922065 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:07:35.922071 | orchestrator | testbed-node-3 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:07:35.922089 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:07:35.922095 | orchestrator | testbed-node-5 : ok=3  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:07:35.922102 | orchestrator | 2026-02-02 01:07:35.922109 | orchestrator | 2026-02-02 01:07:35.922115 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:07:35.922122 | orchestrator | Monday 02 February 2026 01:07:16 +0000 (0:00:05.950) 0:00:33.069 ******* 2026-02-02 01:07:35.922128 | orchestrator | =============================================================================== 2026-02-02 01:07:35.922135 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.46s 2026-02-02 01:07:35.922142 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting endpoints ------------ 5.95s 2026-02-02 01:07:35.922155 | orchestrator | service-ks-register : ceph-rgw | Granting/revoking user roles ----------- 5.95s 2026-02-02 01:07:35.922161 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.00s 2026-02-02 01:07:35.922167 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.30s 2026-02-02 01:07:35.922173 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting services ------------- 2.92s 2026-02-02 01:07:35.922180 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.21s 2026-02-02 01:07:35.922187 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s 2026-02-02 01:07:35.922193 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.95s 2026-02-02 01:07:35.922200 | orchestrator | 2026-02-02 01:07:35.922206 | orchestrator | 2026-02-02 01:07:35.922212 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 01:07:35.922218 | orchestrator | 2026-02-02 01:07:35.922225 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-02-02 01:07:35.922231 | orchestrator | Monday 02 February 2026 01:05:37 +0000 (0:00:00.362) 0:00:00.362 ******* 2026-02-02 01:07:35.922237 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:07:35.922245 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:07:35.922251 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:07:35.922257 | orchestrator | 2026-02-02 01:07:35.922263 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 01:07:35.922269 | orchestrator | Monday 02 February 2026 01:05:38 +0000 (0:00:00.717) 0:00:01.080 ******* 2026-02-02 01:07:35.922284 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-02 01:07:35.922290 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-02 01:07:35.922297 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-02 01:07:35.922303 | orchestrator | 2026-02-02 01:07:35.922310 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-02 01:07:35.922317 | orchestrator | 2026-02-02 01:07:35.922324 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-02 01:07:35.922330 | orchestrator | Monday 02 February 2026 01:05:39 +0000 (0:00:00.714) 0:00:01.795 ******* 2026-02-02 01:07:35.922337 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:07:35.922344 | orchestrator | 2026-02-02 01:07:35.922351 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] *************** 2026-02-02 01:07:35.922357 | orchestrator | Monday 02 February 2026 01:05:39 +0000 (0:00:00.506) 0:00:02.301 ******* 2026-02-02 01:07:35.922363 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-02 01:07:35.922369 | orchestrator | 2026-02-02 01:07:35.922376 | orchestrator | 
TASK [service-ks-register : magnum | Creating/deleting endpoints] ************** 2026-02-02 01:07:35.922382 | orchestrator | Monday 02 February 2026 01:05:42 +0000 (0:00:03.194) 0:00:05.496 ******* 2026-02-02 01:07:35.922388 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-02 01:07:35.922396 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-02 01:07:35.922403 | orchestrator | 2026-02-02 01:07:35.922410 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-02 01:07:35.922427 | orchestrator | Monday 02 February 2026 01:05:49 +0000 (0:00:06.322) 0:00:11.818 ******* 2026-02-02 01:07:35.922433 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-02 01:07:35.922441 | orchestrator | 2026-02-02 01:07:35.922445 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-02 01:07:35.922455 | orchestrator | Monday 02 February 2026 01:05:52 +0000 (0:00:03.613) 0:00:15.431 ******* 2026-02-02 01:07:35.922459 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-02 01:07:35.922463 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 01:07:35.922467 | orchestrator | 2026-02-02 01:07:35.922471 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-02 01:07:35.922475 | orchestrator | Monday 02 February 2026 01:05:56 +0000 (0:00:03.787) 0:00:19.219 ******* 2026-02-02 01:07:35.922479 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 01:07:35.922483 | orchestrator | 2026-02-02 01:07:35.922487 | orchestrator | TASK [service-ks-register : magnum | Granting/revoking user roles] ************* 2026-02-02 01:07:35.922491 | orchestrator | Monday 02 February 2026 01:06:00 +0000 (0:00:03.602) 0:00:22.821 ******* 2026-02-02 
01:07:35.922495 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-02 01:07:35.922499 | orchestrator | 2026-02-02 01:07:35.922503 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-02 01:07:35.922507 | orchestrator | Monday 02 February 2026 01:06:04 +0000 (0:00:04.168) 0:00:26.990 ******* 2026-02-02 01:07:35.922511 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:07:35.922515 | orchestrator | 2026-02-02 01:07:35.922519 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-02 01:07:35.922531 | orchestrator | Monday 02 February 2026 01:06:07 +0000 (0:00:03.080) 0:00:30.071 ******* 2026-02-02 01:07:35.922536 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:07:35.922540 | orchestrator | 2026-02-02 01:07:35.922544 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-02 01:07:35.922548 | orchestrator | Monday 02 February 2026 01:06:11 +0000 (0:00:03.943) 0:00:34.015 ******* 2026-02-02 01:07:35.922556 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:07:35.922561 | orchestrator | 2026-02-02 01:07:35.922565 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-02 01:07:35.922569 | orchestrator | Monday 02 February 2026 01:06:14 +0000 (0:00:03.436) 0:00:37.452 ******* 2026-02-02 01:07:35.922582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.922623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.922632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.922643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.922660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.922665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.922670 | orchestrator | 2026-02-02 01:07:35.922675 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-02 01:07:35.922680 | orchestrator | Monday 02 February 2026 01:06:16 +0000 (0:00:01.200) 0:00:38.652 ******* 2026-02-02 01:07:35.922685 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:07:35.922690 | orchestrator | 2026-02-02 01:07:35.922694 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-02 01:07:35.922699 | orchestrator | Monday 02 February 2026 01:06:16 +0000 (0:00:00.132) 0:00:38.785 ******* 2026-02-02 01:07:35.922703 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:07:35.922708 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:07:35.922712 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:07:35.922717 | orchestrator | 2026-02-02 01:07:35.922722 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-02 01:07:35.922726 | orchestrator | Monday 02 February 
2026 01:06:16 +0000 (0:00:00.447) 0:00:39.232 ******* 2026-02-02 01:07:35.922731 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 01:07:35.922736 | orchestrator | 2026-02-02 01:07:35.922741 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-02 01:07:35.922745 | orchestrator | Monday 02 February 2026 01:06:17 +0000 (0:00:00.874) 0:00:40.106 ******* 2026-02-02 01:07:35.922750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.922760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.922771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.922777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.922782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.922787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.922795 | orchestrator | 2026-02-02 01:07:35.922800 | orchestrator | TASK [magnum : Set 
magnum kubeconfig file's path] ****************************** 2026-02-02 01:07:35.922805 | orchestrator | Monday 02 February 2026 01:06:19 +0000 (0:00:02.272) 0:00:42.379 ******* 2026-02-02 01:07:35.922809 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:07:35.922814 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:07:35.922819 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:07:35.922824 | orchestrator | 2026-02-02 01:07:35.922828 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-02 01:07:35.922833 | orchestrator | Monday 02 February 2026 01:06:20 +0000 (0:00:00.510) 0:00:42.890 ******* 2026-02-02 01:07:35.922840 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:07:35.922845 | orchestrator | 2026-02-02 01:07:35.922850 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-02 01:07:35.922855 | orchestrator | Monday 02 February 2026 01:06:20 +0000 (0:00:00.638) 0:00:43.528 ******* 2026-02-02 01:07:35.922890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.922896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.922902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.922910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.922919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.922932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.922938 | orchestrator | 2026-02-02 01:07:35.922942 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-02 01:07:35.922947 | orchestrator | Monday 02 February 2026 01:06:23 +0000 (0:00:02.196) 0:00:45.725 ******* 2026-02-02 01:07:35.922952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:07:35.922958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:07:35.922970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:07:35.922978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:07:35.922983 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:07:35.922988 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:07:35.922993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:07:35.922999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:07:35.923007 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:07:35.923011 | orchestrator | 2026-02-02 01:07:35.923015 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-02 01:07:35.923019 | orchestrator | Monday 02 February 2026 01:06:24 +0000 (0:00:01.176) 0:00:46.901 ******* 2026-02-02 01:07:35.923023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:07:35.923034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:07:35.923039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:07:35.923044 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:07:35.923051 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:07:35.923055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:07:35.923060 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:07:35.923067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:07:35.923071 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:07:35.923075 | orchestrator | 2026-02-02 01:07:35.923079 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-02 01:07:35.923083 | orchestrator | Monday 02 February 2026 01:06:25 +0000 (0:00:01.450) 0:00:48.351 ******* 2026-02-02 01:07:35.923090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.923095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.923102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.923107 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.923117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.923121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.923125 | orchestrator | 2026-02-02 01:07:35.923130 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-02 01:07:35.923134 | orchestrator | Monday 02 February 2026 01:06:27 +0000 (0:00:02.214) 0:00:50.566 ******* 2026-02-02 01:07:35.923138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.923146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.923155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.923162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.923166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.923173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.923178 | orchestrator | 2026-02-02 01:07:35.923182 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-02 01:07:35.923186 | orchestrator | Monday 02 February 2026 01:06:35 +0000 (0:00:07.430) 0:00:57.996 ******* 2026-02-02 01:07:35.923190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:07:35.923201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:07:35.923208 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:07:35.923219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:07:35.923232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:07:35.923240 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:07:35.923247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:07:35.923258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:07:35.923264 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:07:35.923270 | orchestrator | 2026-02-02 01:07:35.923278 | orchestrator | TASK [service-check-containers : magnum | Check containers] ******************** 2026-02-02 01:07:35.923284 | orchestrator | Monday 02 February 2026 01:06:37 +0000 (0:00:01.806) 0:00:59.803 ******* 2026-02-02 01:07:35.923295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.923307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.923314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:07:35.923321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.923336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.923344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:07:35.923355 | orchestrator | 2026-02-02 01:07:35.923362 | orchestrator | TASK 
[service-check-containers : magnum | Notify handlers to restart containers] *** 2026-02-02 01:07:35.923368 | orchestrator | Monday 02 February 2026 01:06:40 +0000 (0:00:03.346) 0:01:03.149 ******* 2026-02-02 01:07:35.923375 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 01:07:35.923382 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:07:35.923388 | orchestrator | } 2026-02-02 01:07:35.923395 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 01:07:35.923402 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:07:35.923409 | orchestrator | } 2026-02-02 01:07:35.923416 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 01:07:35.923422 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:07:35.923428 | orchestrator | } 2026-02-02 01:07:35.923435 | orchestrator | 2026-02-02 01:07:35.923441 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 01:07:35.923445 | orchestrator | Monday 02 February 2026 01:06:40 +0000 (0:00:00.282) 0:01:03.432 ******* 2026-02-02 01:07:35.923449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:07:35.923454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:07:35.923458 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:07:35.923471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option 
httpchk']}}}})  2026-02-02 01:07:35.923480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:07:35.923485 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:07:35.923489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:07:35.923494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:07:35.923498 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:07:35.923502 | orchestrator | 2026-02-02 01:07:35.923506 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-02 01:07:35.923510 | orchestrator | Monday 02 February 2026 01:06:41 +0000 (0:00:01.090) 0:01:04.522 ******* 2026-02-02 01:07:35.923514 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:07:35.923519 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:07:35.923526 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:07:35.923532 | orchestrator | 2026-02-02 01:07:35.923538 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-02 01:07:35.923545 | orchestrator | Monday 02 February 2026 01:06:42 +0000 (0:00:00.874) 0:01:05.397 ******* 2026-02-02 01:07:35.923551 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:07:35.923559 | orchestrator | 2026-02-02 01:07:35.923565 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-02 01:07:35.923570 | orchestrator | Monday 02 February 2026 01:06:44 +0000 (0:00:02.021) 0:01:07.419 ******* 2026-02-02 01:07:35.923580 | orchestrator | changed: [testbed-node-0] 2026-02-02 
01:07:35.923602 | orchestrator | 2026-02-02 01:07:35.923612 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-02 01:07:35.923618 | orchestrator | Monday 02 February 2026 01:06:46 +0000 (0:00:02.050) 0:01:09.469 ******* 2026-02-02 01:07:35.923624 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:07:35.923630 | orchestrator | 2026-02-02 01:07:35.923636 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-02 01:07:35.923641 | orchestrator | Monday 02 February 2026 01:07:02 +0000 (0:00:15.228) 0:01:24.698 ******* 2026-02-02 01:07:35.923647 | orchestrator | 2026-02-02 01:07:35.923653 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-02 01:07:35.923659 | orchestrator | Monday 02 February 2026 01:07:02 +0000 (0:00:00.066) 0:01:24.764 ******* 2026-02-02 01:07:35.923665 | orchestrator | 2026-02-02 01:07:35.923671 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-02 01:07:35.923677 | orchestrator | Monday 02 February 2026 01:07:02 +0000 (0:00:00.075) 0:01:24.840 ******* 2026-02-02 01:07:35.923683 | orchestrator | 2026-02-02 01:07:35.923694 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-02 01:07:35.923700 | orchestrator | Monday 02 February 2026 01:07:02 +0000 (0:00:00.112) 0:01:24.952 ******* 2026-02-02 01:07:35.923706 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:07:35.923713 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:07:35.923718 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:07:35.923724 | orchestrator | 2026-02-02 01:07:35.923730 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-02 01:07:35.923737 | orchestrator | Monday 02 February 2026 01:07:14 +0000 (0:00:12.039) 0:01:36.992 ******* 2026-02-02 
01:07:35.923744 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:07:35.923750 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:07:35.923756 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:07:35.923762 | orchestrator | 2026-02-02 01:07:35.923768 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:07:35.923775 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 01:07:35.923782 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 01:07:35.923789 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 01:07:35.923795 | orchestrator | 2026-02-02 01:07:35.923801 | orchestrator | 2026-02-02 01:07:35.923808 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:07:35.923813 | orchestrator | Monday 02 February 2026 01:07:33 +0000 (0:00:18.968) 0:01:55.961 ******* 2026-02-02 01:07:35.923821 | orchestrator | =============================================================================== 2026-02-02 01:07:35.923827 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 18.97s 2026-02-02 01:07:35.923833 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.23s 2026-02-02 01:07:35.923840 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.04s 2026-02-02 01:07:35.923846 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.43s 2026-02-02 01:07:35.923852 | orchestrator | service-ks-register : magnum | Creating/deleting endpoints -------------- 6.32s 2026-02-02 01:07:35.923858 | orchestrator | service-ks-register : magnum | Granting/revoking user roles ------------- 4.17s 2026-02-02 01:07:35.923864 | 
orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.94s 2026-02-02 01:07:35.923871 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.79s 2026-02-02 01:07:35.923877 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.61s 2026-02-02 01:07:35.923889 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.60s 2026-02-02 01:07:35.923895 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.44s 2026-02-02 01:07:35.923901 | orchestrator | service-check-containers : magnum | Check containers -------------------- 3.35s 2026-02-02 01:07:35.923907 | orchestrator | service-ks-register : magnum | Creating/deleting services --------------- 3.19s 2026-02-02 01:07:35.923913 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.08s 2026-02-02 01:07:35.923919 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.27s 2026-02-02 01:07:35.923924 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.21s 2026-02-02 01:07:35.923930 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.20s 2026-02-02 01:07:35.923937 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.05s 2026-02-02 01:07:35.923943 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.02s 2026-02-02 01:07:35.923948 | orchestrator | magnum : Copying over existing policy file ------------------------------ 1.81s 2026-02-02 01:07:35.923954 | orchestrator | 2026-02-02 01:07:35 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:38.970915 | orchestrator | 2026-02-02 01:07:38 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:38.972983 | orchestrator | 
2026-02-02 01:07:38 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:38.976763 | orchestrator | 2026-02-02 01:07:38 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:07:38.979900 | orchestrator | 2026-02-02 01:07:38 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:07:38.980289 | orchestrator | 2026-02-02 01:07:38 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:42.036041 | orchestrator | 2026-02-02 01:07:42 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:42.037692 | orchestrator | 2026-02-02 01:07:42 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:42.039557 | orchestrator | 2026-02-02 01:07:42 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:07:42.042284 | orchestrator | 2026-02-02 01:07:42 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:07:42.042341 | orchestrator | 2026-02-02 01:07:42 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:45.082652 | orchestrator | 2026-02-02 01:07:45 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:45.084569 | orchestrator | 2026-02-02 01:07:45 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:45.085957 | orchestrator | 2026-02-02 01:07:45 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:07:45.087263 | orchestrator | 2026-02-02 01:07:45 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:07:45.087279 | orchestrator | 2026-02-02 01:07:45 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:48.123612 | orchestrator | 2026-02-02 01:07:48 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:48.123995 | orchestrator | 2026-02-02 01:07:48 | INFO  | 
Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:48.125315 | orchestrator | 2026-02-02 01:07:48 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:07:48.126619 | orchestrator | 2026-02-02 01:07:48 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:07:48.126695 | orchestrator | 2026-02-02 01:07:48 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:51.166090 | orchestrator | 2026-02-02 01:07:51 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:51.168866 | orchestrator | 2026-02-02 01:07:51 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:51.171287 | orchestrator | 2026-02-02 01:07:51 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:07:51.173394 | orchestrator | 2026-02-02 01:07:51 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:07:51.173457 | orchestrator | 2026-02-02 01:07:51 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:54.221645 | orchestrator | 2026-02-02 01:07:54 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:54.223326 | orchestrator | 2026-02-02 01:07:54 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:54.227763 | orchestrator | 2026-02-02 01:07:54 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:07:54.230725 | orchestrator | 2026-02-02 01:07:54 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:07:54.231333 | orchestrator | 2026-02-02 01:07:54 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:07:57.320289 | orchestrator | 2026-02-02 01:07:57 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:07:57.322869 | orchestrator | 2026-02-02 01:07:57 | INFO  | Task 
a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:07:57.324238 | orchestrator | 2026-02-02 01:07:57 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:07:57.325909 | orchestrator | 2026-02-02 01:07:57 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:07:57.325939 | orchestrator | 2026-02-02 01:07:57 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:08:00.374825 | orchestrator | 2026-02-02 01:08:00 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state STARTED 2026-02-02 01:08:00.379824 | orchestrator | 2026-02-02 01:08:00 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED 2026-02-02 01:08:00.380657 | orchestrator | 2026-02-02 01:08:00 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:08:00.381704 | orchestrator | 2026-02-02 01:08:00 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:08:00.383139 | orchestrator | 2026-02-02 01:08:00 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:08:03.428294 | orchestrator | 2026-02-02 01:08:03 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:08:03.431016 | orchestrator | 2026-02-02 01:08:03 | INFO  | Task c7f70a08-c689-41bb-8ca4-28b7bec560f3 is in state SUCCESS 2026-02-02 01:08:03.431304 | orchestrator | 2026-02-02 01:08:03.432727 | orchestrator | 2026-02-02 01:08:03.433221 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 01:08:03.433249 | orchestrator | 2026-02-02 01:08:03.433260 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 01:08:03.433272 | orchestrator | Monday 02 February 2026 01:03:25 +0000 (0:00:00.365) 0:00:00.365 ******* 2026-02-02 01:08:03.433283 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:08:03.433294 | orchestrator | ok: [testbed-node-1] 
2026-02-02 01:08:03.433320 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:08:03.433331 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:08:03.433650 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:08:03.433665 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:08:03.433702 | orchestrator | 2026-02-02 01:08:03.433713 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 01:08:03.433723 | orchestrator | Monday 02 February 2026 01:03:26 +0000 (0:00:00.730) 0:00:01.095 ******* 2026-02-02 01:08:03.433733 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-02 01:08:03.433743 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-02 01:08:03.433753 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-02 01:08:03.433762 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-02 01:08:03.433772 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-02 01:08:03.433781 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-02 01:08:03.433791 | orchestrator | 2026-02-02 01:08:03.433800 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-02 01:08:03.433810 | orchestrator | 2026-02-02 01:08:03.433819 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-02 01:08:03.433829 | orchestrator | Monday 02 February 2026 01:03:26 +0000 (0:00:00.696) 0:00:01.792 ******* 2026-02-02 01:08:03.433840 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 01:08:03.433852 | orchestrator | 2026-02-02 01:08:03.433906 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-02 01:08:03.433920 | orchestrator | Monday 02 February 
2026 01:03:27 +0000 (0:00:01.204) 0:00:02.996 ******* 2026-02-02 01:08:03.433930 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:08:03.433940 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:08:03.433950 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:08:03.433959 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:08:03.433969 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:08:03.433979 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:08:03.433989 | orchestrator | 2026-02-02 01:08:03.433998 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-02 01:08:03.434008 | orchestrator | Monday 02 February 2026 01:03:29 +0000 (0:00:01.419) 0:00:04.415 ******* 2026-02-02 01:08:03.434070 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:08:03.434081 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:08:03.434091 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:08:03.434100 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:08:03.434110 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:08:03.434119 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:08:03.434129 | orchestrator | 2026-02-02 01:08:03.434139 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-02 01:08:03.434149 | orchestrator | Monday 02 February 2026 01:03:30 +0000 (0:00:01.151) 0:00:05.567 ******* 2026-02-02 01:08:03.434158 | orchestrator | ok: [testbed-node-0] => { 2026-02-02 01:08:03.434169 | orchestrator |  "changed": false, 2026-02-02 01:08:03.434179 | orchestrator |  "msg": "All assertions passed" 2026-02-02 01:08:03.434189 | orchestrator | } 2026-02-02 01:08:03.434199 | orchestrator | ok: [testbed-node-1] => { 2026-02-02 01:08:03.434208 | orchestrator |  "changed": false, 2026-02-02 01:08:03.434218 | orchestrator |  "msg": "All assertions passed" 2026-02-02 01:08:03.434227 | orchestrator | } 2026-02-02 01:08:03.434237 | orchestrator | ok: [testbed-node-2] => { 2026-02-02 
01:08:03.434246 | orchestrator |  "changed": false, 2026-02-02 01:08:03.434256 | orchestrator |  "msg": "All assertions passed" 2026-02-02 01:08:03.434266 | orchestrator | } 2026-02-02 01:08:03.434275 | orchestrator | ok: [testbed-node-3] => { 2026-02-02 01:08:03.434285 | orchestrator |  "changed": false, 2026-02-02 01:08:03.434295 | orchestrator |  "msg": "All assertions passed" 2026-02-02 01:08:03.434305 | orchestrator | } 2026-02-02 01:08:03.434314 | orchestrator | ok: [testbed-node-4] => { 2026-02-02 01:08:03.434324 | orchestrator |  "changed": false, 2026-02-02 01:08:03.434333 | orchestrator |  "msg": "All assertions passed" 2026-02-02 01:08:03.434343 | orchestrator | } 2026-02-02 01:08:03.434353 | orchestrator | ok: [testbed-node-5] => { 2026-02-02 01:08:03.434372 | orchestrator |  "changed": false, 2026-02-02 01:08:03.434381 | orchestrator |  "msg": "All assertions passed" 2026-02-02 01:08:03.434391 | orchestrator | } 2026-02-02 01:08:03.434401 | orchestrator | 2026-02-02 01:08:03.434410 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-02 01:08:03.434420 | orchestrator | Monday 02 February 2026 01:03:31 +0000 (0:00:00.978) 0:00:06.546 ******* 2026-02-02 01:08:03.434430 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.434439 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.434449 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.434458 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.434468 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.434477 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.434487 | orchestrator | 2026-02-02 01:08:03.434497 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] ************** 2026-02-02 01:08:03.434507 | orchestrator | Monday 02 February 2026 01:03:32 +0000 (0:00:00.799) 0:00:07.345 ******* 2026-02-02 01:08:03.434516 | orchestrator | changed: 
[testbed-node-0] => (item=neutron (network)) 2026-02-02 01:08:03.434526 | orchestrator | 2026-02-02 01:08:03.434536 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting endpoints] ************* 2026-02-02 01:08:03.434545 | orchestrator | Monday 02 February 2026 01:03:35 +0000 (0:00:03.525) 0:00:10.871 ******* 2026-02-02 01:08:03.434556 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-02 01:08:03.434593 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-02 01:08:03.434605 | orchestrator | 2026-02-02 01:08:03.434656 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-02 01:08:03.434670 | orchestrator | Monday 02 February 2026 01:03:42 +0000 (0:00:07.065) 0:00:17.937 ******* 2026-02-02 01:08:03.434681 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-02 01:08:03.434693 | orchestrator | 2026-02-02 01:08:03.434705 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-02 01:08:03.434718 | orchestrator | Monday 02 February 2026 01:03:46 +0000 (0:00:03.460) 0:00:21.398 ******* 2026-02-02 01:08:03.434738 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-02-02 01:08:03.434751 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 01:08:03.434764 | orchestrator | 2026-02-02 01:08:03.434777 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-02 01:08:03.434788 | orchestrator | Monday 02 February 2026 01:03:50 +0000 (0:00:03.782) 0:00:25.180 ******* 2026-02-02 01:08:03.434801 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 01:08:03.434814 | orchestrator | 2026-02-02 01:08:03.434826 | orchestrator | TASK [service-ks-register : neutron | Granting/revoking user roles] ************ 
2026-02-02 01:08:03.434838 | orchestrator | Monday 02 February 2026 01:03:53 +0000 (0:00:03.385) 0:00:28.566 ******* 2026-02-02 01:08:03.434849 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-02 01:08:03.434860 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-02 01:08:03.434872 | orchestrator | 2026-02-02 01:08:03.434883 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-02 01:08:03.434895 | orchestrator | Monday 02 February 2026 01:04:01 +0000 (0:00:07.932) 0:00:36.498 ******* 2026-02-02 01:08:03.434907 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.434918 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.434928 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.434938 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.434947 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.434957 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.434967 | orchestrator | 2026-02-02 01:08:03.434976 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-02 01:08:03.434993 | orchestrator | Monday 02 February 2026 01:04:02 +0000 (0:00:00.972) 0:00:37.470 ******* 2026-02-02 01:08:03.435003 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.435013 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.435022 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.435032 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.435042 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.435051 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.435061 | orchestrator | 2026-02-02 01:08:03.435070 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-02 01:08:03.435080 | orchestrator | Monday 02 February 2026 
01:04:05 +0000 (0:00:03.047) 0:00:40.518 ******* 2026-02-02 01:08:03.435090 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:08:03.435099 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:08:03.435109 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:08:03.435119 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:08:03.435128 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:08:03.435138 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:08:03.435147 | orchestrator | 2026-02-02 01:08:03.435157 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-02 01:08:03.435166 | orchestrator | Monday 02 February 2026 01:04:07 +0000 (0:00:01.581) 0:00:42.100 ******* 2026-02-02 01:08:03.435177 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.435186 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.435196 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.435206 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.435215 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.435225 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.435235 | orchestrator | 2026-02-02 01:08:03.435244 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-02 01:08:03.435254 | orchestrator | Monday 02 February 2026 01:04:10 +0000 (0:00:03.194) 0:00:45.294 ******* 2026-02-02 01:08:03.435268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.435323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.435337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.435364 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.435377 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.435388 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.435398 | orchestrator | 2026-02-02 01:08:03.435409 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-02 01:08:03.435419 | orchestrator | Monday 02 February 2026 01:04:13 +0000 (0:00:03.598) 0:00:48.893 ******* 2026-02-02 01:08:03.435429 | orchestrator | [WARNING]: Skipped 2026-02-02 01:08:03.435439 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-02 01:08:03.435475 | orchestrator | due to this access issue: 2026-02-02 01:08:03.435487 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-02 01:08:03.435496 | orchestrator | a directory 2026-02-02 01:08:03.435506 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 01:08:03.435523 | orchestrator | 2026-02-02 01:08:03.435532 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-02 01:08:03.435547 | orchestrator | Monday 02 February 2026 01:04:14 +0000 (0:00:01.046) 0:00:49.940 ******* 
2026-02-02 01:08:03.435557 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 01:08:03.435605 | orchestrator | 2026-02-02 01:08:03.435615 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-02 01:08:03.435625 | orchestrator | Monday 02 February 2026 01:04:16 +0000 (0:00:01.384) 0:00:51.325 ******* 2026-02-02 01:08:03.435636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.435648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.435659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.435730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.435753 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.435764 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.435774 | orchestrator 
| 2026-02-02 01:08:03.435784 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-02 01:08:03.435794 | orchestrator | Monday 02 February 2026 01:04:20 +0000 (0:00:04.494) 0:00:55.819 ******* 2026-02-02 01:08:03.435805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.435815 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.435861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.435879 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.435926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.435939 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.435949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.435959 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.435970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.435979 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.435990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.436000 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.436011 | orchestrator | 2026-02-02 01:08:03.436020 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-02 01:08:03.436037 | orchestrator | Monday 02 February 2026 01:04:24 +0000 (0:00:04.165) 0:00:59.985 ******* 2026-02-02 01:08:03.436152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.436169 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.436179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.436190 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.436201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.436211 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.436222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.436239 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.436276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.436288 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.436303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.436314 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.436323 | orchestrator | 2026-02-02 01:08:03.436333 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-02 01:08:03.436350 | orchestrator | Monday 02 February 2026 01:04:28 +0000 (0:00:03.308) 0:01:03.294 ******* 2026-02-02 01:08:03.436366 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.436383 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.436409 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.436428 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.436444 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.436459 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.436474 | orchestrator | 2026-02-02 01:08:03.436490 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-02 01:08:03.436506 | orchestrator | Monday 02 February 2026 01:04:31 +0000 (0:00:03.128) 0:01:06.423 ******* 2026-02-02 01:08:03.436521 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.436536 | orchestrator | 2026-02-02 01:08:03.436549 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-02 01:08:03.436590 | orchestrator | Monday 02 February 2026 01:04:31 +0000 (0:00:00.184) 0:01:06.608 ******* 2026-02-02 01:08:03.436608 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.436622 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.436637 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.436653 | orchestrator | 
skipping: [testbed-node-3] 2026-02-02 01:08:03.436669 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.436685 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.436700 | orchestrator | 2026-02-02 01:08:03.436716 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-02 01:08:03.436733 | orchestrator | Monday 02 February 2026 01:04:32 +0000 (0:00:01.014) 0:01:07.623 ******* 2026-02-02 01:08:03.436751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.436782 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.436793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.436804 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.436868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.436881 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.436892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.436902 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.436913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.436928 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.436939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.436949 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.436958 | orchestrator | 2026-02-02 01:08:03.436968 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-02 01:08:03.436978 | orchestrator | Monday 02 February 2026 01:04:35 +0000 (0:00:03.115) 0:01:10.738 ******* 2026-02-02 01:08:03.436999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.437012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.437023 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.437045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.437056 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.437081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.437092 | orchestrator | 2026-02-02 01:08:03.437102 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-02 01:08:03.437112 | orchestrator | Monday 02 February 2026 01:04:40 +0000 (0:00:04.687) 0:01:15.425 ******* 2026-02-02 01:08:03.437122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.437140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.437151 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.437173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.437184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.437195 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.437217 | orchestrator | 2026-02-02 01:08:03.437227 | 
orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-02 01:08:03.437237 | orchestrator | Monday 02 February 2026 01:04:48 +0000 (0:00:07.783) 0:01:23.209 ******* 2026-02-02 01:08:03.437247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.437257 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.437274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.437285 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.437300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.437311 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.437321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.437339 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.437350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.437360 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.437370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.437380 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.437390 | orchestrator | 2026-02-02 01:08:03.437400 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-02 01:08:03.437410 | orchestrator | Monday 02 February 2026 01:04:51 +0000 (0:00:02.968) 0:01:26.178 ******* 2026-02-02 01:08:03.437420 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.437429 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.437439 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.437449 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:08:03.437459 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:08:03.437468 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:08:03.437478 | orchestrator | 2026-02-02 01:08:03.437488 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-02 01:08:03.437502 | orchestrator | Monday 02 February 2026 01:04:54 +0000 (0:00:02.989) 0:01:29.167 ******* 2026-02-02 01:08:03.437517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.437534 | orchestrator | skipping: [testbed-node-3] 2026-02-02 
01:08:03.437544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.437554 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.437590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.437609 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.437623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.437643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.437682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.437706 | orchestrator | 2026-02-02 01:08:03.437725 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-02-02 01:08:03.437741 | orchestrator | Monday 02 February 2026 01:04:58 +0000 (0:00:03.974) 0:01:33.142 ******* 2026-02-02 01:08:03.437757 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.437773 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.437790 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.437807 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.437822 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.437840 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.437858 | orchestrator | 2026-02-02 01:08:03.437874 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-02 01:08:03.437892 | orchestrator | Monday 02 February 2026 01:05:00 +0000 (0:00:02.376) 0:01:35.519 ******* 2026-02-02 01:08:03.437908 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.437924 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.437936 | orchestrator | skipping: 
[testbed-node-1] 2026-02-02 01:08:03.437946 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.437956 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.437972 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.437988 | orchestrator | 2026-02-02 01:08:03.438004 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-02 01:08:03.438094 | orchestrator | Monday 02 February 2026 01:05:02 +0000 (0:00:02.446) 0:01:37.965 ******* 2026-02-02 01:08:03.438112 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.438122 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.438131 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.438141 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.438151 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.438160 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.438170 | orchestrator | 2026-02-02 01:08:03.438180 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-02 01:08:03.438189 | orchestrator | Monday 02 February 2026 01:05:05 +0000 (0:00:02.502) 0:01:40.468 ******* 2026-02-02 01:08:03.438199 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.438209 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.438218 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.438228 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.438237 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.438247 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.438256 | orchestrator | 2026-02-02 01:08:03.438266 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-02 01:08:03.438276 | orchestrator | Monday 02 February 2026 01:05:08 +0000 (0:00:02.797) 0:01:43.266 ******* 2026-02-02 01:08:03.438285 | orchestrator | skipping: 
[testbed-node-0] 2026-02-02 01:08:03.438295 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.438304 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.438314 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.438324 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.438342 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.438352 | orchestrator | 2026-02-02 01:08:03.438362 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-02 01:08:03.438372 | orchestrator | Monday 02 February 2026 01:05:10 +0000 (0:00:02.179) 0:01:45.445 ******* 2026-02-02 01:08:03.438382 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-02 01:08:03.438392 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.438402 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-02 01:08:03.438412 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.438422 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-02 01:08:03.438431 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.438441 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-02 01:08:03.438451 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.438460 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-02 01:08:03.438470 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.438489 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-02 01:08:03.438499 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.438509 | orchestrator | 2026-02-02 01:08:03.438519 | orchestrator | TASK [neutron : Copying over 
l3_agent.ini] ************************************* 2026-02-02 01:08:03.438528 | orchestrator | Monday 02 February 2026 01:05:13 +0000 (0:00:03.000) 0:01:48.446 ******* 2026-02-02 01:08:03.438545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.438557 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.438592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.438605 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.438615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.438632 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.438649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.438660 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.438675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.438686 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.438696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2026-02-02 01:08:03.438706 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.438716 | orchestrator | 2026-02-02 01:08:03.438726 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-02 01:08:03.438736 | orchestrator | Monday 02 February 2026 01:05:16 +0000 (0:00:03.057) 0:01:51.504 ******* 2026-02-02 01:08:03.438746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.438767 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.438778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.438788 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.438810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.438821 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.438831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.438842 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.438853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.438869 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.438880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-02 01:08:03.438890 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:08:03.438904 | orchestrator |
2026-02-02 01:08:03.438919 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-02-02 01:08:03.438935 | orchestrator | Monday 02 February 2026 01:05:18 +0000 (0:00:02.160) 0:01:53.664 *******
2026-02-02 01:08:03.438951 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:08:03.438967 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:08:03.438983 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:08:03.438999 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:08:03.439009 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:08:03.439019 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:08:03.439029 | orchestrator |
2026-02-02 01:08:03.439039 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-02-02 01:08:03.439049 | orchestrator | Monday 02 February 2026 01:05:20 +0000 (0:00:02.410) 0:01:56.075 *******
2026-02-02 01:08:03.439059 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:08:03.439068 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:08:03.439078 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:08:03.439088 | orchestrator | changed: [testbed-node-3]
2026-02-02 01:08:03.439098 | orchestrator | changed: [testbed-node-4]
2026-02-02 01:08:03.439108 | orchestrator | changed: [testbed-node-5]
2026-02-02 01:08:03.439117 | orchestrator |
2026-02-02 01:08:03.439133 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-02-02 01:08:03.439143 | orchestrator | Monday 02 February 2026 01:05:27 +0000 (0:00:06.328) 0:02:02.403 *******
2026-02-02 01:08:03.439153 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:08:03.439163 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:08:03.439172 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:08:03.439182 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:08:03.439192 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:08:03.439207 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:08:03.439217 | orchestrator |
2026-02-02 01:08:03.439227 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-02-02 01:08:03.439237 | orchestrator | Monday 02 February 2026 01:05:29 +0000 (0:00:02.627) 0:02:05.030 *******
2026-02-02 01:08:03.439247 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:08:03.439256 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:08:03.439266 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:08:03.439276 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:08:03.439285 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:08:03.439295 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:08:03.439305 | orchestrator |
2026-02-02 01:08:03.439315 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-02-02 01:08:03.439333 | orchestrator | Monday 02 February 2026 01:05:33 +0000 (0:00:03.823) 0:02:08.854 *******
2026-02-02 01:08:03.439343 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:08:03.439352 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:08:03.439362 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:08:03.439372 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:08:03.439381 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:08:03.439391 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:08:03.439401 | orchestrator |
2026-02-02 01:08:03.439411 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-02-02 01:08:03.439421 | orchestrator | Monday 02 February 2026 01:05:37 +0000 (0:00:03.689) 0:02:12.543 *******
2026-02-02 01:08:03.439431 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:08:03.439441 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:08:03.439450 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:08:03.439460 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:08:03.439470 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:08:03.439479 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:08:03.439489 | orchestrator |
2026-02-02 01:08:03.439499 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-02-02 01:08:03.439509 | orchestrator | Monday 02 February 2026 01:05:39 +0000 (0:00:02.128) 0:02:14.671 *******
2026-02-02 01:08:03.439518 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:08:03.439528 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:08:03.439538 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:08:03.439547 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:08:03.439557 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:08:03.439635 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:08:03.439652 | orchestrator |
2026-02-02 01:08:03.439665 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-02-02 01:08:03.439675 | orchestrator | Monday 02 February 2026 01:05:41 +0000 (0:00:01.881) 0:02:16.553 *******
2026-02-02 01:08:03.439685 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:08:03.439695 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:08:03.439704 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:08:03.439714 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:08:03.439723 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:08:03.439733 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:08:03.439742 | orchestrator |
2026-02-02 01:08:03.439752 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-02-02 01:08:03.439767 | orchestrator | Monday 02 February 2026 01:05:44 +0000 (0:00:03.011) 0:02:19.564 *******
2026-02-02 01:08:03.439787 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:08:03.439811 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:08:03.439825 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:08:03.439840 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:08:03.439854 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:08:03.439869 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:08:03.439884 | orchestrator |
2026-02-02 01:08:03.439898 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-02-02 01:08:03.439912 | orchestrator | Monday 02 February 2026 01:05:46 +0000 (0:00:02.154) 0:02:21.719 *******
2026-02-02 01:08:03.439925 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-02 01:08:03.439942 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-02 01:08:03.439957 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:08:03.439972 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:08:03.439989 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-02 01:08:03.440006 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:08:03.440021 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-02 01:08:03.440050 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:08:03.440073 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-02 01:08:03.440092 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.440108 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-02 01:08:03.440123 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.440139 | orchestrator | 2026-02-02 01:08:03.440152 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-02-02 01:08:03.440164 | orchestrator | Monday 02 February 2026 01:05:48 +0000 (0:00:02.248) 0:02:23.967 ******* 2026-02-02 01:08:03.440196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.440213 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.440226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.440240 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.440249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.440257 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.440266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.440281 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.440301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.440310 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.440318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.440326 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:08:03.440334 | orchestrator | 2026-02-02 01:08:03.440342 | orchestrator | TASK [service-check-containers : neutron | Check containers] ******************* 2026-02-02 01:08:03.440350 | orchestrator | Monday 02 February 2026 01:05:51 +0000 (0:00:02.315) 0:02:26.283 ******* 2026-02-02 01:08:03.440358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.440368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.440389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:08:03.440403 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.440413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 01:08:03.440421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-02 01:08:03.440435 | orchestrator |
2026-02-02 01:08:03.440443 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] ***
2026-02-02 01:08:03.440452 | orchestrator | Monday 02 February 2026 01:05:54 +0000 (0:00:03.124) 0:02:29.407 *******
2026-02-02 01:08:03.440459 | orchestrator | changed: [testbed-node-0] => {
2026-02-02 01:08:03.440468 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:08:03.440476 | orchestrator | }
2026-02-02 01:08:03.440483 | orchestrator | changed: [testbed-node-1] => {
2026-02-02 01:08:03.440491 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:08:03.440499 | orchestrator | }
2026-02-02 01:08:03.440507 | orchestrator | changed: [testbed-node-2] => {
2026-02-02 01:08:03.440515 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:08:03.440523 | orchestrator | }
2026-02-02 01:08:03.440531 | orchestrator | changed: [testbed-node-3] => {
2026-02-02 01:08:03.440539 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:08:03.440547 | orchestrator | }
2026-02-02 01:08:03.440555 | orchestrator | changed: [testbed-node-4] => {
2026-02-02 01:08:03.440586 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:08:03.440595 | orchestrator | }
2026-02-02 01:08:03.440603 | orchestrator | changed: [testbed-node-5] => {
2026-02-02 01:08:03.440611 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:08:03.440619 | orchestrator | }
2026-02-02 01:08:03.440626 | orchestrator |
2026-02-02 01:08:03.440634 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-02 01:08:03.440642 | orchestrator | Monday 02
February 2026 01:05:55 +0000 (0:00:00.754) 0:02:30.162 ******* 2026-02-02 01:08:03.440662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.440671 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:08:03.440680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.440689 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:08:03.440697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.440711 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:08:03.440719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:08:03.440728 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:08:03.440736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.440745 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:08:03.440762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 01:08:03.440771 | orchestrator | skipping: [testbed-node-3] 2026-02-02 
01:08:03.440779 | orchestrator |
2026-02-02 01:08:03.440787 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-02 01:08:03.440795 | orchestrator | Monday 02 February 2026 01:05:57 +0000 (0:00:02.609) 0:02:32.771 *******
2026-02-02 01:08:03.440803 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:08:03.440811 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:08:03.440819 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:08:03.440827 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:08:03.440834 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:08:03.440842 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:08:03.440850 | orchestrator |
2026-02-02 01:08:03.440858 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-02-02 01:08:03.440875 | orchestrator | Monday 02 February 2026 01:05:58 +0000 (0:00:00.688) 0:02:33.460 *******
2026-02-02 01:08:03.440883 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:08:03.440891 | orchestrator |
2026-02-02 01:08:03.440899 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-02-02 01:08:03.440906 | orchestrator | Monday 02 February 2026 01:06:00 +0000 (0:00:02.222) 0:02:35.682 *******
2026-02-02 01:08:03.440914 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:08:03.440922 | orchestrator |
2026-02-02 01:08:03.440930 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-02-02 01:08:03.440938 | orchestrator | Monday 02 February 2026 01:06:03 +0000 (0:00:02.506) 0:02:38.188 *******
2026-02-02 01:08:03.440946 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:08:03.440954 | orchestrator |
2026-02-02 01:08:03.440961 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-02 01:08:03.440969 | orchestrator | Monday 02 February 2026 01:06:44 +0000 (0:00:41.684) 0:03:19.873 *******
2026-02-02 01:08:03.440977 | orchestrator |
2026-02-02 01:08:03.440985 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-02 01:08:03.440993 | orchestrator | Monday 02 February 2026 01:06:44 +0000 (0:00:00.079) 0:03:19.952 *******
2026-02-02 01:08:03.441001 | orchestrator |
2026-02-02 01:08:03.441009 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-02 01:08:03.441017 | orchestrator | Monday 02 February 2026 01:06:45 +0000 (0:00:00.242) 0:03:20.195 *******
2026-02-02 01:08:03.441025 | orchestrator |
2026-02-02 01:08:03.441033 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-02 01:08:03.441041 | orchestrator | Monday 02 February 2026 01:06:45 +0000 (0:00:00.080) 0:03:20.275 *******
2026-02-02 01:08:03.441049 | orchestrator |
2026-02-02 01:08:03.441057 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-02 01:08:03.441065 | orchestrator | Monday 02 February 2026 01:06:45 +0000 (0:00:00.076) 0:03:20.352 *******
2026-02-02 01:08:03.441072 | orchestrator |
2026-02-02 01:08:03.441080 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-02 01:08:03.441088 | orchestrator | Monday 02 February 2026 01:06:45 +0000 (0:00:00.068) 0:03:20.420 *******
2026-02-02 01:08:03.441096 | orchestrator |
2026-02-02 01:08:03.441104 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-02-02 01:08:03.441112 | orchestrator | Monday 02 February 2026 01:06:45 +0000 (0:00:00.093) 0:03:20.514 *******
2026-02-02 01:08:03.441119 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:08:03.441127 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:08:03.441135 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:08:03.441143 | orchestrator |
2026-02-02 01:08:03.441151 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-02-02 01:08:03.441159 | orchestrator | Monday 02 February 2026 01:07:09 +0000 (0:00:23.968) 0:03:44.482 *******
2026-02-02 01:08:03.441167 | orchestrator | changed: [testbed-node-3]
2026-02-02 01:08:03.441175 | orchestrator | changed: [testbed-node-4]
2026-02-02 01:08:03.441182 | orchestrator | changed: [testbed-node-5]
2026-02-02 01:08:03.441190 | orchestrator |
2026-02-02 01:08:03.441198 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 01:08:03.441206 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-02 01:08:03.441216 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-02 01:08:03.441224 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-02 01:08:03.441232 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-02 01:08:03.441252 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-02 01:08:03.441260 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-02 01:08:03.441268 | orchestrator |
2026-02-02 01:08:03.441276 | orchestrator |
2026-02-02 01:08:03.441284 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 01:08:03.441297 | orchestrator | Monday 02 February 2026 01:07:59 +0000 (0:00:49.870) 0:04:34.353 *******
2026-02-02 01:08:03.441305 | orchestrator | ===============================================================================
2026-02-02 01:08:03.441312 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 49.87s
2026-02-02 01:08:03.441320 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.68s
2026-02-02 01:08:03.441328 | orchestrator | neutron : Restart neutron-server container ----------------------------- 23.97s
2026-02-02 01:08:03.441336 | orchestrator | service-ks-register : neutron | Granting/revoking user roles ------------ 7.93s
2026-02-02 01:08:03.441344 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.78s
2026-02-02 01:08:03.441352 | orchestrator | service-ks-register : neutron | Creating/deleting endpoints ------------- 7.07s
2026-02-02 01:08:03.441360 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 6.33s
2026-02-02 01:08:03.441368 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.69s
2026-02-02 01:08:03.441377 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.49s
2026-02-02 01:08:03.441391 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.17s
2026-02-02 01:08:03.441411 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.97s
2026-02-02 01:08:03.441424 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.82s
2026-02-02 01:08:03.441436 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.78s
2026-02-02 01:08:03.441449 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.69s
2026-02-02 01:08:03.441460 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.60s
2026-02-02 01:08:03.441472 | orchestrator | service-ks-register : neutron | Creating/deleting services -------------- 3.53s
2026-02-02 01:08:03.441484 | orchestrator | service-ks-register :
neutron | Creating projects ----------------------- 3.46s
2026-02-02 01:08:03.441496 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.39s
2026-02-02 01:08:03.441509 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.31s
2026-02-02 01:08:03.441523 | orchestrator | Setting sysctl values --------------------------------------------------- 3.19s
2026-02-02 01:08:03.441537 | orchestrator | 2026-02-02 01:08:03 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED
2026-02-02 01:08:03.441550 | orchestrator | 2026-02-02 01:08:03 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED
2026-02-02 01:08:03.441586 | orchestrator | 2026-02-02 01:08:03 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED
2026-02-02 01:08:03.441596 | orchestrator | 2026-02-02 01:08:03 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:08:06.485692 | orchestrator | 2026-02-02 01:08:06 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED
2026-02-02 01:08:06.486191 | orchestrator | 2026-02-02 01:08:06 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state STARTED
2026-02-02 01:08:06.487099 | orchestrator | 2026-02-02 01:08:06 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED
2026-02-02 01:08:06.489764 | orchestrator | 2026-02-02 01:08:06 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED
2026-02-02 01:08:06.489878 | orchestrator | 2026-02-02 01:08:06 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:09:25.691235 | orchestrator | 2026-02-02 01:09:25 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED
2026-02-02 01:09:25.692917 | orchestrator | 2026-02-02 01:09:25 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED
2026-02-02 01:09:25.695690 | orchestrator | 2026-02-02 01:09:25 | INFO  | Task a9d62681-01ac-47d9-9247-fdca135c10f0 is in state SUCCESS
2026-02-02 01:09:25.697596 | orchestrator |
2026-02-02 01:09:25.697657 | orchestrator |
2026-02-02 01:09:25.697668 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 01:09:25.697679 | orchestrator |
2026-02-02 01:09:25.697688 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 01:09:25.697697 | orchestrator | Monday 02 February 2026 01:06:20 +0000 (0:00:00.416) 0:00:00.416 *******
2026-02-02 01:09:25.697705 | orchestrator | ok: [testbed-manager]
2026-02-02 01:09:25.697714 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:09:25.697723 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:09:25.697731 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:09:25.697739 | orchestrator | ok: [testbed-node-3]
2026-02-02 01:09:25.697747 | orchestrator | ok: [testbed-node-4]
2026-02-02 01:09:25.697755 | orchestrator | ok: [testbed-node-5]
2026-02-02 01:09:25.697763 | orchestrator |
2026-02-02 01:09:25.697771 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 01:09:25.697779 | orchestrator | Monday 02 February 2026 01:06:21 +0000 (0:00:00.745) 0:00:01.161 *******
2026-02-02 01:09:25.697788 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-02-02 01:09:25.697797 | orchestrator | ok: [testbed-node-0] =>
(item=enable_prometheus_True)
2026-02-02 01:09:25.697805 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-02-02 01:09:25.697813 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-02-02 01:09:25.697821 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-02-02 01:09:25.697829 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-02-02 01:09:25.697837 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-02-02 01:09:25.697867 | orchestrator |
2026-02-02 01:09:25.698004 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-02-02 01:09:25.698066 | orchestrator |
2026-02-02 01:09:25.698083 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-02 01:09:25.698096 | orchestrator | Monday 02 February 2026 01:06:21 +0000 (0:00:00.740) 0:00:01.902 *******
2026-02-02 01:09:25.698675 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 01:09:25.698706 | orchestrator |
2026-02-02 01:09:25.698715 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-02-02 01:09:25.698723 | orchestrator | Monday 02 February 2026 01:06:23 +0000 (0:00:01.322) 0:00:03.224 *******
2026-02-02 01:09:25.698737 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-02-02 01:09:25.698763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.698774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.698796 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.698888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.698913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.698923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.698932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.698941 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.698956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.698964 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.698982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.698991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.699008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.699017 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.699026 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:09:25.699039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.699048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.699063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.699079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.699087 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1',
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.699095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.699104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.699116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2026-02-02 01:09:25.699125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.699133 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.699176 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.699192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.699200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.699227 | orchestrator | 2026-02-02 01:09:25.699235 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-02 01:09:25.699243 | orchestrator | Monday 02 February 2026 01:06:26 +0000 (0:00:02.976) 0:00:06.201 ******* 2026-02-02 01:09:25.699252 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 01:09:25.699260 | orchestrator | 2026-02-02 01:09:25.699269 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-02 01:09:25.699279 | orchestrator | Monday 02 February 2026 01:06:27 +0000 (0:00:01.553) 0:00:07.754 ******* 2026-02-02 01:09:25.699290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.699301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.699316 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-02 01:09:25.699867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.699886 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.699895 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.699903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.699912 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.699920 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.699933 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.699942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.699979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.699989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.699998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.700006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.700015 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.700034 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.700043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.700057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.700088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.700097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.700106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.700114 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.700127 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:09:25.700136 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.700175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.700185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.700193 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.700201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.700210 | orchestrator | 2026-02-02 01:09:25.700218 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-02 01:09:25.700226 | orchestrator | Monday 02 February 2026 01:06:34 +0000 (0:00:06.835) 0:00:14.590 ******* 2026-02-02 01:09:25.700235 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-02 01:09:25.700249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.700265 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.700295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.700305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.700313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.700322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.700330 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-02-02 01:09:25.700339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.700356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.700365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.700395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.700405 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:09:25.700414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.700423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.700435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.700450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.700459 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:09:25.700470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.700565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.700585 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.700599 | orchestrator | skipping: [testbed-manager] 2026-02-02 01:09:25.700614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-02 01:09:25.700628 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:09:25.700640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.700650 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:09:25.700659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.700681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.700692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.700702 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:09:25.700738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.700749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.700759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.700769 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:09:25.700778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.700787 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:09:25.700797 | orchestrator | 2026-02-02 01:09:25.700807 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-02 01:09:25.700823 | orchestrator | Monday 02 February 2026 01:06:37 +0000 (0:00:03.302) 0:00:17.892 ******* 2026-02-02 01:09:25.700831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.700843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.700852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.700883 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-02 01:09:25.700894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.700903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.700911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.700925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.700941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.700949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.700979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.700988 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.700997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.701005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.701019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 
01:09:25.701027 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:09:25.701035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.701048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.701056 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:09:25.701065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.701094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.701103 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:09:25.701111 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.701120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.701133 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:09:25.701141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.701150 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:09:25.701163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.701171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 01:09:25.701202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.701212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.701225 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:09:25.701233 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 01:09:25.701242 | orchestrator | skipping: [testbed-manager] 2026-02-02 01:09:25.701250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 01:09:25.701258 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:09:25.701266 | orchestrator | 2026-02-02 01:09:25.701274 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-02 01:09:25.701282 | orchestrator | Monday 02 February 2026 01:06:41 +0000 (0:00:03.492) 0:00:21.385 ******* 2026-02-02 01:09:25.701291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.701303 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-02 01:09:25.701335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.701344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.701358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.701366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.701375 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.701383 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 01:09:25.701395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.701403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.701416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.701425 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.701438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.701446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.701454 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.701463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.701474 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.701595 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.701637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.701654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.701663 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.701672 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:09:25.701682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.701696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.701704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 01:09:25.701718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.701734 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.701742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.701763 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 01:09:25.701772 | orchestrator | 2026-02-02 01:09:25.701781 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-02 01:09:25.701799 | orchestrator | Monday 02 February 2026 01:06:48 +0000 (0:00:06.738) 0:00:28.123 ******* 2026-02-02 01:09:25.701808 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 01:09:25.701817 | orchestrator | 2026-02-02 01:09:25.701825 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-02 01:09:25.701833 | orchestrator | Monday 02 February 2026 01:06:49 +0000 (0:00:01.461) 0:00:29.584 ******* 2026-02-02 01:09:25.701841 | orchestrator | skipping: [testbed-manager] 2026-02-02 01:09:25.701849 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:09:25.701857 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:09:25.701865 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:09:25.701873 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:09:25.701881 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:09:25.701889 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:09:25.701897 | orchestrator | 2026-02-02 01:09:25.701904 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-02-02 01:09:25.701913 | orchestrator | Monday 02 February 2026 01:06:50 +0000 (0:00:00.713) 0:00:30.298 ******* 2026-02-02 01:09:25.701921 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 01:09:25.701928 
| orchestrator | 2026-02-02 01:09:25.701936 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-02-02 01:09:25.701944 | orchestrator | Monday 02 February 2026 01:06:51 +0000 (0:00:00.980) 0:00:31.279 ******* 2026-02-02 01:09:25.701957 | orchestrator | [WARNING]: Skipped 2026-02-02 01:09:25.701965 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.701973 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-02-02 01:09:25.701981 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.701989 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-02-02 01:09:25.702006 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 01:09:25.702045 | orchestrator | [WARNING]: Skipped 2026-02-02 01:09:25.702054 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.702062 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-02-02 01:09:25.702070 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.702078 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-02-02 01:09:25.702086 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 01:09:25.702094 | orchestrator | [WARNING]: Skipped 2026-02-02 01:09:25.702102 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.702110 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-02-02 01:09:25.702117 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.702125 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-02-02 01:09:25.702134 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-02 01:09:25.702142 | orchestrator 
| [WARNING]: Skipped 2026-02-02 01:09:25.702154 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.702163 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-02-02 01:09:25.702171 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.702179 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-02-02 01:09:25.702187 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-02 01:09:25.702195 | orchestrator | [WARNING]: Skipped 2026-02-02 01:09:25.702202 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.702210 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-02-02 01:09:25.702218 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.702226 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-02-02 01:09:25.702234 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-02 01:09:25.702242 | orchestrator | [WARNING]: Skipped 2026-02-02 01:09:25.702250 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.702258 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-02-02 01:09:25.702266 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.702274 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-02-02 01:09:25.702282 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-02 01:09:25.702289 | orchestrator | [WARNING]: Skipped 2026-02-02 01:09:25.702297 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.702305 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-02-02 01:09:25.702313 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-02 01:09:25.702321 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-02-02 01:09:25.702329 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-02 01:09:25.702337 | orchestrator | 2026-02-02 01:09:25.702345 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-02-02 01:09:25.702353 | orchestrator | Monday 02 February 2026 01:06:53 +0000 (0:00:02.395) 0:00:33.675 ******* 2026-02-02 01:09:25.702361 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-02 01:09:25.702369 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:09:25.702377 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-02 01:09:25.702385 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:09:25.702393 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-02 01:09:25.702407 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:09:25.702415 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-02 01:09:25.702423 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:09:25.702431 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-02 01:09:25.702439 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:09:25.702447 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-02 01:09:25.702455 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:09:25.702463 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-02-02 01:09:25.702471 | orchestrator | 2026-02-02 01:09:25.702479 | orchestrator | TASK 
[prometheus : Copying over prometheus web config file] ******************** 2026-02-02 01:09:25.702513 | orchestrator | Monday 02 February 2026 01:07:09 +0000 (0:00:16.158) 0:00:49.833 ******* 2026-02-02 01:09:25.702522 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-02 01:09:25.702530 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:09:25.702538 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-02 01:09:25.702551 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:09:25.702559 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-02 01:09:25.702567 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:09:25.702575 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-02 01:09:25.702583 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:09:25.702590 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-02 01:09:25.702598 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:09:25.702606 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-02 01:09:25.702614 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:09:25.702622 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-02 01:09:25.702630 | orchestrator | 2026-02-02 01:09:25.702639 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-02 01:09:25.702647 | orchestrator | Monday 02 February 2026 01:07:14 +0000 (0:00:04.865) 0:00:54.699 ******* 2026-02-02 01:09:25.702655 | orchestrator | changed: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-02 01:09:25.702668 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-02 01:09:25.702676 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-02 01:09:25.702684 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-02 01:09:25.702692 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:09:25.702700 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:09:25.702708 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:09:25.702716 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-02 01:09:25.702724 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:09:25.702732 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-02 01:09:25.702740 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:09:25.702748 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-02 01:09:25.702762 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:09:25.702770 | orchestrator | 2026-02-02 01:09:25.702778 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-02-02 01:09:25.702787 | orchestrator | Monday 02 February 2026 01:07:17 +0000 (0:00:03.156) 0:00:57.856 ******* 2026-02-02 01:09:25.702795 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 01:09:25.702802 | orchestrator | 2026-02-02 
01:09:25.702810 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-02-02 01:09:25.702818 | orchestrator | Monday 02 February 2026 01:07:19 +0000 (0:00:01.424) 0:00:59.280 ******* 2026-02-02 01:09:25.702826 | orchestrator | skipping: [testbed-manager] 2026-02-02 01:09:25.702834 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:09:25.702842 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:09:25.702850 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:09:25.702858 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:09:25.702866 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:09:25.702874 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:09:25.702882 | orchestrator | 2026-02-02 01:09:25.702891 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-02-02 01:09:25.702899 | orchestrator | Monday 02 February 2026 01:07:20 +0000 (0:00:01.259) 0:01:00.540 ******* 2026-02-02 01:09:25.702907 | orchestrator | skipping: [testbed-manager] 2026-02-02 01:09:25.702914 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:09:25.702923 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:09:25.702930 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:09:25.702938 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:09:25.702946 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:09:25.702955 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:09:25.702963 | orchestrator | 2026-02-02 01:09:25.702971 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-02-02 01:09:25.702979 | orchestrator | Monday 02 February 2026 01:07:23 +0000 (0:00:02.806) 0:01:03.346 ******* 2026-02-02 01:09:25.702987 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-02 01:09:25.702995 | orchestrator | skipping: 
[testbed-manager] 2026-02-02 01:09:25.703003 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-02 01:09:25.703011 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:09:25.703019 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-02 01:09:25.703027 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:09:25.703035 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-02 01:09:25.703043 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:09:25.703051 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-02 01:09:25.703059 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:09:25.703070 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-02 01:09:25.703079 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:09:25.703087 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-02 01:09:25.703095 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:09:25.703103 | orchestrator | 2026-02-02 01:09:25.703110 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-02-02 01:09:25.703119 | orchestrator | Monday 02 February 2026 01:07:25 +0000 (0:00:02.457) 0:01:05.803 ******* 2026-02-02 01:09:25.703126 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-02 01:09:25.703134 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:09:25.703143 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-02 01:09:25.703156 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:09:25.703164 
| orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-02 01:09:25.703172 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:09:25.703180 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-02 01:09:25.703188 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:09:25.703196 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-02 01:09:25.703204 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:09:25.703216 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-02 01:09:25.703225 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:09:25.703233 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-02 01:09:25.703240 | orchestrator | 2026-02-02 01:09:25.703248 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-02 01:09:25.703256 | orchestrator | Monday 02 February 2026 01:07:27 +0000 (0:00:02.202) 0:01:08.006 ******* 2026-02-02 01:09:25.703264 | orchestrator | [WARNING]: Skipped 2026-02-02 01:09:25.703272 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-02 01:09:25.703280 | orchestrator | due to this access issue: 2026-02-02 01:09:25.703288 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-02 01:09:25.703296 | orchestrator | not a directory 2026-02-02 01:09:25.703304 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 01:09:25.703312 | orchestrator | 2026-02-02 01:09:25.703320 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 
2026-02-02 01:09:25.703327 | orchestrator | Monday 02 February 2026 01:07:29 +0000 (0:00:01.144) 0:01:09.151 *******
2026-02-02 01:09:25.703336 | orchestrator | skipping: [testbed-manager]
2026-02-02 01:09:25.703343 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:09:25.703352 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:09:25.703360 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:09:25.703368 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:09:25.703376 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:09:25.703384 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:09:25.703392 | orchestrator |
2026-02-02 01:09:25.703400 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-02-02 01:09:25.703408 | orchestrator | Monday 02 February 2026 01:07:29 +0000 (0:00:00.784) 0:01:09.935 *******
2026-02-02 01:09:25.703416 | orchestrator | skipping: [testbed-manager]
2026-02-02 01:09:25.703424 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:09:25.703432 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:09:25.703440 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:09:25.703447 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:09:25.703455 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:09:25.703463 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:09:25.703471 | orchestrator |
2026-02-02 01:09:25.703479 | orchestrator | TASK [service-check-containers : prometheus | Check containers] ****************
2026-02-02 01:09:25.703516 | orchestrator | Monday 02 February 2026 01:07:30 +0000 (0:00:00.692) 0:01:10.628 *******
2026-02-02 01:09:25.703526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.703540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.703553 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-02-02 01:09:25.703567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.703576 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.703585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.703593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.703601 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.703615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.703627 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.703636 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.703648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.703657 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.703665 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.703673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.703687 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.703695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.703707 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.703716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.703729 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.703738 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.703746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.703755 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:09:25.703772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.703781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.703789 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.703802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.703811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.703819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.703833 | orchestrator |
2026-02-02 01:09:25.703848 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] ***
2026-02-02 01:09:25.703861 | orchestrator | Monday 02 February 2026 01:07:35 +0000 (0:00:04.790) 0:01:15.418 *******
2026-02-02 01:09:25.703875 | orchestrator | changed: [testbed-manager] => {
2026-02-02 01:09:25.703889 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:09:25.703902 | orchestrator | }
2026-02-02 01:09:25.703910 | orchestrator | changed: [testbed-node-0] => {
2026-02-02 01:09:25.703918 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:09:25.703926 | orchestrator | }
2026-02-02 01:09:25.703933 | orchestrator | changed: [testbed-node-1] => {
2026-02-02 01:09:25.703941 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:09:25.703949 | orchestrator | }
2026-02-02 01:09:25.703957 | orchestrator | changed: [testbed-node-2] => {
2026-02-02 01:09:25.703965 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:09:25.703973 | orchestrator | }
2026-02-02 01:09:25.703981 | orchestrator | changed: [testbed-node-3] => {
2026-02-02 01:09:25.703989 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:09:25.703997 | orchestrator | }
2026-02-02 01:09:25.704005 | orchestrator | changed: [testbed-node-4] => {
2026-02-02 01:09:25.704012 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:09:25.704020 | orchestrator | }
2026-02-02 01:09:25.704028 | orchestrator | changed: [testbed-node-5] => {
2026-02-02 01:09:25.704036 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:09:25.704043 | orchestrator | }
2026-02-02 01:09:25.704051 | orchestrator |
2026-02-02 01:09:25.704059 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-02 01:09:25.704067 | orchestrator | Monday 02 February 2026 01:07:36 +0000 (0:00:01.004) 0:01:16.423 *******
2026-02-02 01:09:25.704082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.704091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.704100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.704113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.704122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.704137 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-02-02 01:09:25.704146 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.704158 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.704173 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:09:25.704182 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.704196 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:09:25.704204 | orchestrator | skipping: [testbed-manager]
2026-02-02 01:09:25.704212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.704221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.704229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.704237 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:09:25.704245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.704257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.704266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.704278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.704291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.704300 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:09:25.704308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.704317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.704325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.704333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.704345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 01:09:25.704353 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:09:25.704362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.704380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.704389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.704397 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:09:25.704405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 01:09:25.704413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.704421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 01:09:25.704429 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:09:25.704438 | orchestrator |
2026-02-02 01:09:25.704446 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-02-02 01:09:25.704453 | orchestrator | Monday 02 February 2026 01:07:38 +0000 (0:00:02.240) 0:01:18.663 *******
2026-02-02 01:09:25.704462 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-02 01:09:25.704469 | orchestrator | skipping: [testbed-manager]
2026-02-02 01:09:25.704477 | orchestrator |
2026-02-02 01:09:25.704506 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-02 01:09:25.704520 | orchestrator | Monday 02 February 2026 01:07:39 +0000 (0:00:01.301) 0:01:19.965 *******
2026-02-02 01:09:25.704528 | orchestrator |
2026-02-02 01:09:25.704536 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-02 01:09:25.704544 | orchestrator | Monday 02 February 2026 01:07:40 +0000 (0:00:00.096) 0:01:20.061 *******
2026-02-02 01:09:25.704552 | orchestrator |
2026-02-02 01:09:25.704560 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-02 01:09:25.704568 | orchestrator | Monday 02 February 2026 01:07:40 +0000 (0:00:00.082) 0:01:20.144 *******
2026-02-02 01:09:25.704581 | orchestrator |
2026-02-02 01:09:25.704589 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-02 01:09:25.704596 | orchestrator | Monday 02 February 2026 01:07:40 +0000 (0:00:00.079) 0:01:20.223 *******
2026-02-02 01:09:25.704604 | orchestrator |
2026-02-02 01:09:25.704612 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-02 01:09:25.704620 | orchestrator | Monday 02 February 2026 01:07:40 +0000 (0:00:00.067) 0:01:20.291 *******
2026-02-02 01:09:25.704628 | orchestrator |
2026-02-02 01:09:25.704636 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-02 01:09:25.704643 | orchestrator | Monday 02 February 2026 01:07:40 +0000 (0:00:00.067) 0:01:20.359 *******
2026-02-02 01:09:25.704651 | orchestrator |
2026-02-02 01:09:25.704659 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-02 01:09:25.704667 | orchestrator | Monday 02 February 2026 01:07:40 +0000 (0:00:00.349) 0:01:20.708 *******
2026-02-02 01:09:25.704675 | orchestrator |
2026-02-02 01:09:25.704683 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-02-02 01:09:25.704696 | orchestrator | Monday 02 February 2026 01:07:40 +0000 (0:00:00.099) 0:01:20.808 *******
2026-02-02 01:09:25.704704 | orchestrator | changed: [testbed-manager]
2026-02-02 01:09:25.704712 | orchestrator |
2026-02-02 01:09:25.704720 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-02-02 01:09:25.704728 | orchestrator | Monday 02 February 2026 01:07:59 +0000 (0:00:18.631) 0:01:39.439 *******
2026-02-02 01:09:25.704736 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:09:25.704744 | orchestrator | changed: [testbed-node-5]
2026-02-02 01:09:25.704752 | orchestrator | changed: [testbed-node-3]
2026-02-02 01:09:25.704760 | orchestrator |
changed: [testbed-node-0] 2026-02-02 01:09:25.704768 | orchestrator | changed: [testbed-manager] 2026-02-02 01:09:25.704776 | orchestrator | changed: [testbed-node-4] 2026-02-02 01:09:25.704783 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:09:25.704791 | orchestrator | 2026-02-02 01:09:25.704799 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-02 01:09:25.704807 | orchestrator | Monday 02 February 2026 01:08:15 +0000 (0:00:15.926) 0:01:55.366 ******* 2026-02-02 01:09:25.704815 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:09:25.704823 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:09:25.704830 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:09:25.704838 | orchestrator | 2026-02-02 01:09:25.704846 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-02 01:09:25.704854 | orchestrator | Monday 02 February 2026 01:08:21 +0000 (0:00:06.449) 0:02:01.815 ******* 2026-02-02 01:09:25.704862 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:09:25.704870 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:09:25.704878 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:09:25.704885 | orchestrator | 2026-02-02 01:09:25.704893 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-02 01:09:25.704901 | orchestrator | Monday 02 February 2026 01:08:27 +0000 (0:00:05.792) 0:02:07.608 ******* 2026-02-02 01:09:25.704909 | orchestrator | changed: [testbed-manager] 2026-02-02 01:09:25.704917 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:09:25.704925 | orchestrator | changed: [testbed-node-3] 2026-02-02 01:09:25.704933 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:09:25.704940 | orchestrator | changed: [testbed-node-5] 2026-02-02 01:09:25.704948 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:09:25.704956 | orchestrator | changed: 
[testbed-node-4] 2026-02-02 01:09:25.704964 | orchestrator | 2026-02-02 01:09:25.704972 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-02-02 01:09:25.704980 | orchestrator | Monday 02 February 2026 01:08:45 +0000 (0:00:18.013) 0:02:25.622 ******* 2026-02-02 01:09:25.704988 | orchestrator | changed: [testbed-manager] 2026-02-02 01:09:25.704996 | orchestrator | 2026-02-02 01:09:25.705004 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-02-02 01:09:25.705017 | orchestrator | Monday 02 February 2026 01:08:53 +0000 (0:00:07.972) 0:02:33.594 ******* 2026-02-02 01:09:25.705025 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:09:25.705033 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:09:25.705040 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:09:25.705048 | orchestrator | 2026-02-02 01:09:25.705056 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-02-02 01:09:25.705064 | orchestrator | Monday 02 February 2026 01:09:05 +0000 (0:00:11.801) 0:02:45.396 ******* 2026-02-02 01:09:25.705072 | orchestrator | changed: [testbed-manager] 2026-02-02 01:09:25.705080 | orchestrator | 2026-02-02 01:09:25.705088 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-02-02 01:09:25.705096 | orchestrator | Monday 02 February 2026 01:09:12 +0000 (0:00:06.757) 0:02:52.154 ******* 2026-02-02 01:09:25.705104 | orchestrator | changed: [testbed-node-3] 2026-02-02 01:09:25.705112 | orchestrator | changed: [testbed-node-4] 2026-02-02 01:09:25.705120 | orchestrator | changed: [testbed-node-5] 2026-02-02 01:09:25.705128 | orchestrator | 2026-02-02 01:09:25.705136 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:09:25.705144 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 
failed=0 skipped=10  rescued=0 ignored=0 2026-02-02 01:09:25.705153 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-02 01:09:25.705165 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-02 01:09:25.705174 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-02 01:09:25.705182 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-02 01:09:25.705190 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-02 01:09:25.705198 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-02 01:09:25.705206 | orchestrator | 2026-02-02 01:09:25.705214 | orchestrator | 2026-02-02 01:09:25.705223 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:09:25.705231 | orchestrator | Monday 02 February 2026 01:09:22 +0000 (0:00:10.779) 0:03:02.933 ******* 2026-02-02 01:09:25.705239 | orchestrator | =============================================================================== 2026-02-02 01:09:25.705247 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.63s 2026-02-02 01:09:25.705255 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 18.01s 2026-02-02 01:09:25.705267 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.16s 2026-02-02 01:09:25.705276 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.93s 2026-02-02 01:09:25.705283 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.80s 2026-02-02 01:09:25.705291 | orchestrator | prometheus : Restart 
prometheus-libvirt-exporter container ------------- 10.78s 2026-02-02 01:09:25.705299 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.97s 2026-02-02 01:09:25.705307 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.84s 2026-02-02 01:09:25.705315 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.76s 2026-02-02 01:09:25.705323 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.74s 2026-02-02 01:09:25.705331 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.45s 2026-02-02 01:09:25.705343 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.79s 2026-02-02 01:09:25.705351 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.87s 2026-02-02 01:09:25.705359 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.79s 2026-02-02 01:09:25.705367 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 3.49s 2026-02-02 01:09:25.705375 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 3.30s 2026-02-02 01:09:25.705383 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.16s 2026-02-02 01:09:25.705391 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.98s 2026-02-02 01:09:25.705399 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.81s 2026-02-02 01:09:25.705407 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.46s 2026-02-02 01:09:25.705415 | orchestrator | 2026-02-02 01:09:25 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:09:25.705423 | orchestrator | 2026-02-02 
01:09:25 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:09:25.705431 | orchestrator | 2026-02-02 01:09:25 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:09:28.748928 | orchestrator | 2026-02-02 01:09:28 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:09:28.751736 | orchestrator | 2026-02-02 01:09:28 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:09:28.756067 | orchestrator | 2026-02-02 01:09:28 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:09:28.757683 | orchestrator | 2026-02-02 01:09:28 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:09:28.757708 | orchestrator | 2026-02-02 01:09:28 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:09:31.807572 | orchestrator | 2026-02-02 01:09:31 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:09:31.808518 | orchestrator | 2026-02-02 01:09:31 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:09:31.809586 | orchestrator | 2026-02-02 01:09:31 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:09:31.812462 | orchestrator | 2026-02-02 01:09:31 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:09:31.812538 | orchestrator | 2026-02-02 01:09:31 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:09:34.859362 | orchestrator | 2026-02-02 01:09:34 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:09:34.861049 | orchestrator | 2026-02-02 01:09:34 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:09:34.863121 | orchestrator | 2026-02-02 01:09:34 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:09:34.864929 | orchestrator | 2026-02-02 01:09:34 | INFO  | Task 
455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:10:38.914661 | orchestrator | 2026-02-02 01:10:38 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:10:41.963895 | orchestrator | 2026-02-02 01:10:41 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:10:41.965843 | orchestrator | 2026-02-02 01:10:41 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:10:41.968655 | orchestrator | 2026-02-02 01:10:41 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state STARTED 2026-02-02 01:10:41.969561 | orchestrator | 2026-02-02 01:10:41 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:10:41.969837 | orchestrator | 2026-02-02 01:10:41 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:10:45.014555 | orchestrator | 2026-02-02 01:10:45 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:10:45.014933 | orchestrator | 2026-02-02 01:10:45 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:10:45.016132 | orchestrator | 2026-02-02 01:10:45 | INFO  | Task b5e09eaa-52a8-4f71-864b-650e92d48595 is in state STARTED 2026-02-02 01:10:45.018612 | orchestrator | 2026-02-02 01:10:45.018672 | orchestrator | 2026-02-02 01:10:45 | INFO  | Task 4a92ef59-5f18-4625-8023-1cf403c894ee is in state SUCCESS 2026-02-02 01:10:45.020228 | orchestrator | 2026-02-02 01:10:45.020269 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 01:10:45.020281 | orchestrator | 2026-02-02 01:10:45.020295 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 01:10:45.020309 | orchestrator | Monday 02 February 2026 01:07:25 +0000 (0:00:00.364) 0:00:00.364 ******* 2026-02-02 01:10:45.020343 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:10:45.020352 | orchestrator | ok: [testbed-node-1] 
2026-02-02 01:10:45.020360 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:10:45.020367 | orchestrator | 2026-02-02 01:10:45.020375 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 01:10:45.020414 | orchestrator | Monday 02 February 2026 01:07:25 +0000 (0:00:00.404) 0:00:00.768 ******* 2026-02-02 01:10:45.020423 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-02-02 01:10:45.020432 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-02-02 01:10:45.020440 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-02-02 01:10:45.020447 | orchestrator | 2026-02-02 01:10:45.020455 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-02-02 01:10:45.020462 | orchestrator | 2026-02-02 01:10:45.020470 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-02 01:10:45.020477 | orchestrator | Monday 02 February 2026 01:07:26 +0000 (0:00:00.870) 0:00:01.639 ******* 2026-02-02 01:10:45.020484 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:10:45.020493 | orchestrator | 2026-02-02 01:10:45.020500 | orchestrator | TASK [service-ks-register : glance | Creating/deleting services] *************** 2026-02-02 01:10:45.020508 | orchestrator | Monday 02 February 2026 01:07:27 +0000 (0:00:01.000) 0:00:02.639 ******* 2026-02-02 01:10:45.020515 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-02-02 01:10:45.020523 | orchestrator | 2026-02-02 01:10:45.020530 | orchestrator | TASK [service-ks-register : glance | Creating/deleting endpoints] ************** 2026-02-02 01:10:45.020537 | orchestrator | Monday 02 February 2026 01:07:30 +0000 (0:00:03.156) 0:00:05.796 ******* 2026-02-02 01:10:45.020545 | orchestrator | changed: [testbed-node-0] => (item=glance -> 
https://api-int.testbed.osism.xyz:9292 -> internal) 2026-02-02 01:10:45.020552 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-02-02 01:10:45.020560 | orchestrator | 2026-02-02 01:10:45.020567 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-02-02 01:10:45.020574 | orchestrator | Monday 02 February 2026 01:07:38 +0000 (0:00:07.283) 0:00:13.079 ******* 2026-02-02 01:10:45.020582 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-02 01:10:45.020590 | orchestrator | 2026-02-02 01:10:45.020597 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-02-02 01:10:45.020624 | orchestrator | Monday 02 February 2026 01:07:41 +0000 (0:00:03.623) 0:00:16.703 ******* 2026-02-02 01:10:45.020633 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-02-02 01:10:45.020641 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 01:10:45.020648 | orchestrator | 2026-02-02 01:10:45.020656 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-02-02 01:10:45.020663 | orchestrator | Monday 02 February 2026 01:07:45 +0000 (0:00:04.126) 0:00:20.830 ******* 2026-02-02 01:10:45.020670 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 01:10:45.020678 | orchestrator | 2026-02-02 01:10:45.020685 | orchestrator | TASK [service-ks-register : glance | Granting/revoking user roles] ************* 2026-02-02 01:10:45.020692 | orchestrator | Monday 02 February 2026 01:07:49 +0000 (0:00:03.519) 0:00:24.350 ******* 2026-02-02 01:10:45.020700 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-02 01:10:45.020707 | orchestrator | 2026-02-02 01:10:45.020714 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-02 01:10:45.020734 | 
orchestrator | Monday 02 February 2026 01:07:53 +0000 (0:00:04.003) 0:00:28.354 ******* 2026-02-02 01:10:45.020762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 01:10:45.020786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 01:10:45.020819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 01:10:45.020836 | orchestrator | 2026-02-02 01:10:45.020845 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-02 01:10:45.020854 | orchestrator | Monday 02 February 2026 01:07:57 +0000 (0:00:03.868) 0:00:32.222 ******* 2026-02-02 01:10:45.020868 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:10:45.020877 | orchestrator | 2026-02-02 01:10:45.020886 | orchestrator | TASK [glance : Ensuring glance 
service ceph config subdir exists] ************** 2026-02-02 01:10:45.020895 | orchestrator | Monday 02 February 2026 01:07:58 +0000 (0:00:00.898) 0:00:33.121 ******* 2026-02-02 01:10:45.020904 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:10:45.020913 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:10:45.020922 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:10:45.020930 | orchestrator | 2026-02-02 01:10:45.020940 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-02 01:10:45.020949 | orchestrator | Monday 02 February 2026 01:08:07 +0000 (0:00:09.646) 0:00:42.767 ******* 2026-02-02 01:10:45.020958 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-02-02 01:10:45.020969 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-02-02 01:10:45.020978 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-02-02 01:10:45.020986 | orchestrator | 2026-02-02 01:10:45.020995 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-02-02 01:10:45.021004 | orchestrator | Monday 02 February 2026 01:08:09 +0000 (0:00:01.767) 0:00:44.534 ******* 2026-02-02 01:10:45.021013 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-02-02 01:10:45.021022 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-02-02 01:10:45.021030 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 
'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-02-02 01:10:45.021039 | orchestrator | 2026-02-02 01:10:45.021047 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-02 01:10:45.021056 | orchestrator | Monday 02 February 2026 01:08:10 +0000 (0:00:01.210) 0:00:45.745 ******* 2026-02-02 01:10:45.021065 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:10:45.021073 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:10:45.021081 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:10:45.021090 | orchestrator | 2026-02-02 01:10:45.021098 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-02-02 01:10:45.021107 | orchestrator | Monday 02 February 2026 01:08:11 +0000 (0:00:00.943) 0:00:46.688 ******* 2026-02-02 01:10:45.021116 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:10:45.021130 | orchestrator | 2026-02-02 01:10:45.021139 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-02-02 01:10:45.021148 | orchestrator | Monday 02 February 2026 01:08:12 +0000 (0:00:00.184) 0:00:46.873 ******* 2026-02-02 01:10:45.021157 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:10:45.021165 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:10:45.021173 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:10:45.021180 | orchestrator | 2026-02-02 01:10:45.021188 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-02 01:10:45.021195 | orchestrator | Monday 02 February 2026 01:08:12 +0000 (0:00:00.341) 0:00:47.214 ******* 2026-02-02 01:10:45.021202 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:10:45.021210 | orchestrator | 2026-02-02 01:10:45.021217 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 
2026-02-02 01:10:45.021229 | orchestrator | Monday 02 February 2026 01:08:12 +0000 (0:00:00.599) 0:00:47.814 ******* 2026-02-02 01:10:45.021243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 01:10:45.021253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 01:10:45.021271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 01:10:45.021280 | orchestrator | 2026-02-02 01:10:45.021287 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-02 01:10:45.021295 | orchestrator | Monday 02 February 2026 01:08:18 +0000 (0:00:05.425) 0:00:53.239 ******* 2026-02-02 01:10:45.021309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 01:10:45.021323 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:10:45.021335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 01:10:45.021343 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:10:45.021357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 01:10:45.021366 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:10:45.021373 | orchestrator | 2026-02-02 01:10:45.021439 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-02 01:10:45.021456 | orchestrator | Monday 02 February 2026 01:08:21 +0000 (0:00:03.383) 0:00:56.622 ******* 2026-02-02 01:10:45.021469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 01:10:45.021478 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:10:45.021492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 01:10:45.021501 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:10:45.021509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 01:10:45.021522 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:10:45.021530 | orchestrator | 2026-02-02 01:10:45.021537 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-02 01:10:45.021544 | orchestrator | Monday 02 February 2026 01:08:26 +0000 (0:00:04.907) 0:01:01.529 ******* 2026-02-02 01:10:45.021560 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:10:45.021568 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:10:45.021575 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:10:45.021583 | orchestrator | 2026-02-02 01:10:45.021590 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-02 01:10:45.021598 | orchestrator | Monday 02 February 2026 01:08:37 +0000 (0:00:10.645) 0:01:12.175 ******* 2026-02-02 01:10:45.021610 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 01:10:45.021620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 01:10:45.021638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 01:10:45.021647 | orchestrator | 2026-02-02 01:10:45.021659 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-02 01:10:45.021667 | orchestrator | Monday 02 February 2026 01:08:42 +0000 (0:00:05.277) 0:01:17.453 ******* 2026-02-02 01:10:45.021674 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:10:45.021682 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:10:45.021689 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:10:45.021697 | orchestrator | 2026-02-02 01:10:45.021704 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-02 01:10:45.021711 | orchestrator | Monday 02 February 2026 01:08:48 +0000 (0:00:06.210) 
0:01:23.663 ******* 2026-02-02 01:10:45.021724 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:10:45.021731 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:10:45.021739 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:10:45.021746 | orchestrator | 2026-02-02 01:10:45.021754 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-02 01:10:45.021761 | orchestrator | Monday 02 February 2026 01:08:55 +0000 (0:00:06.959) 0:01:30.623 ******* 2026-02-02 01:10:45.021768 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:10:45.021776 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:10:45.021783 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:10:45.021791 | orchestrator | 2026-02-02 01:10:45.021798 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-02 01:10:45.021805 | orchestrator | Monday 02 February 2026 01:08:59 +0000 (0:00:04.046) 0:01:34.670 ******* 2026-02-02 01:10:45.021813 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:10:45.021820 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:10:45.021828 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:10:45.021835 | orchestrator | 2026-02-02 01:10:45.021842 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-02 01:10:45.021850 | orchestrator | Monday 02 February 2026 01:09:04 +0000 (0:00:04.523) 0:01:39.193 ******* 2026-02-02 01:10:45.021857 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:10:45.021865 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:10:45.021872 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:10:45.021879 | orchestrator | 2026-02-02 01:10:45.021886 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-02 01:10:45.021896 | orchestrator | Monday 02 February 2026 01:09:04 +0000 (0:00:00.486) 
0:01:39.679 ******* 2026-02-02 01:10:45.021908 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-02 01:10:45.021921 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:10:45.021933 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-02 01:10:45.021944 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:10:45.021951 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-02 01:10:45.021959 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:10:45.021967 | orchestrator | 2026-02-02 01:10:45.021979 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-02 01:10:45.021992 | orchestrator | Monday 02 February 2026 01:09:09 +0000 (0:00:04.815) 0:01:44.495 ******* 2026-02-02 01:10:45.022004 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:10:45.022065 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:10:45.022074 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:10:45.022081 | orchestrator | 2026-02-02 01:10:45.022090 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-02-02 01:10:45.022103 | orchestrator | Monday 02 February 2026 01:09:16 +0000 (0:00:07.199) 0:01:51.694 ******* 2026-02-02 01:10:45.022129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 01:10:45.022147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 01:10:45.022176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 01:10:45.022191 | orchestrator | 2026-02-02 01:10:45.022199 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-02-02 01:10:45.022207 | orchestrator | Monday 02 February 2026 01:09:21 +0000 (0:00:04.388) 0:01:56.083 ******* 2026-02-02 01:10:45.022214 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 01:10:45.022222 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:10:45.022230 | orchestrator | } 2026-02-02 01:10:45.022237 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 01:10:45.022245 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:10:45.022252 | orchestrator | } 2026-02-02 01:10:45.022260 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 01:10:45.022267 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:10:45.022275 | orchestrator | } 2026-02-02 01:10:45.022282 | orchestrator | 2026-02-02 01:10:45.022289 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 01:10:45.022301 | orchestrator | Monday 02 February 2026 01:09:21 +0000 (0:00:00.347) 0:01:56.430 ******* 2026-02-02 01:10:45.022310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 01:10:45.022319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 01:10:45.022333 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:10:45.022340 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:10:45.022354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 01:10:45.022363 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:10:45.022371 | orchestrator | 2026-02-02 01:10:45.022424 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-02 01:10:45.022433 | orchestrator | Monday 02 February 2026 01:09:25 +0000 (0:00:03.837) 0:02:00.268 ******* 2026-02-02 01:10:45.022441 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:10:45.022449 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:10:45.022456 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:10:45.022463 | orchestrator | 2026-02-02 01:10:45.022471 
| orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-02 01:10:45.022478 | orchestrator | Monday 02 February 2026 01:09:26 +0000 (0:00:00.694) 0:02:00.962 ******* 2026-02-02 01:10:45.022486 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:10:45.022493 | orchestrator | 2026-02-02 01:10:45.022501 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-02-02 01:10:45.022508 | orchestrator | Monday 02 February 2026 01:09:28 +0000 (0:00:02.203) 0:02:03.166 ******* 2026-02-02 01:10:45.022515 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:10:45.022523 | orchestrator | 2026-02-02 01:10:45.022530 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-02 01:10:45.022538 | orchestrator | Monday 02 February 2026 01:09:30 +0000 (0:00:02.343) 0:02:05.510 ******* 2026-02-02 01:10:45.022545 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:10:45.022553 | orchestrator | 2026-02-02 01:10:45.022565 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-02 01:10:45.022573 | orchestrator | Monday 02 February 2026 01:09:33 +0000 (0:00:02.347) 0:02:07.857 ******* 2026-02-02 01:10:45.022580 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:10:45.022587 | orchestrator | 2026-02-02 01:10:45.022595 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-02 01:10:45.022602 | orchestrator | Monday 02 February 2026 01:10:05 +0000 (0:00:32.130) 0:02:39.988 ******* 2026-02-02 01:10:45.022610 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:10:45.022617 | orchestrator | 2026-02-02 01:10:45.022625 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-02 01:10:45.022632 | orchestrator | Monday 02 February 2026 01:10:07 +0000 (0:00:02.296) 0:02:42.284 ******* 
2026-02-02 01:10:45.022640 | orchestrator | 2026-02-02 01:10:45.022650 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-02 01:10:45.022658 | orchestrator | Monday 02 February 2026 01:10:07 +0000 (0:00:00.063) 0:02:42.348 ******* 2026-02-02 01:10:45.022666 | orchestrator | 2026-02-02 01:10:45.022673 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-02 01:10:45.022680 | orchestrator | Monday 02 February 2026 01:10:07 +0000 (0:00:00.069) 0:02:42.417 ******* 2026-02-02 01:10:45.022688 | orchestrator | 2026-02-02 01:10:45.022695 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-02 01:10:45.022703 | orchestrator | Monday 02 February 2026 01:10:07 +0000 (0:00:00.069) 0:02:42.487 ******* 2026-02-02 01:10:45.022710 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:10:45.022718 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:10:45.022725 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:10:45.022733 | orchestrator | 2026-02-02 01:10:45.022740 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:10:45.022749 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-02 01:10:45.022757 | orchestrator | testbed-node-1 : ok=17  changed=11  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-02 01:10:45.022764 | orchestrator | testbed-node-2 : ok=17  changed=11  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-02 01:10:45.022772 | orchestrator | 2026-02-02 01:10:45.022779 | orchestrator | 2026-02-02 01:10:45.022787 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:10:45.022794 | orchestrator | Monday 02 February 2026 01:10:43 +0000 (0:00:35.900) 0:03:18.388 ******* 2026-02-02 01:10:45.022806 
| orchestrator | =============================================================================== 2026-02-02 01:10:45.022814 | orchestrator | glance : Restart glance-api container ---------------------------------- 35.90s 2026-02-02 01:10:45.022822 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 32.13s 2026-02-02 01:10:45.022829 | orchestrator | glance : Creating TLS backend PEM File --------------------------------- 10.65s 2026-02-02 01:10:45.022836 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 9.65s 2026-02-02 01:10:45.022844 | orchestrator | service-ks-register : glance | Creating/deleting endpoints -------------- 7.28s 2026-02-02 01:10:45.022851 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 7.20s 2026-02-02 01:10:45.022858 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.96s 2026-02-02 01:10:45.022866 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.21s 2026-02-02 01:10:45.022873 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.43s 2026-02-02 01:10:45.022880 | orchestrator | glance : Copying over config.json files for services -------------------- 5.28s 2026-02-02 01:10:45.022888 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.91s 2026-02-02 01:10:45.022901 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.82s 2026-02-02 01:10:45.022909 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.52s 2026-02-02 01:10:45.022916 | orchestrator | service-check-containers : glance | Check containers -------------------- 4.39s 2026-02-02 01:10:45.022924 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.13s 2026-02-02 01:10:45.022931 | orchestrator 
| glance : Copying over glance-image-import.conf -------------------------- 4.05s 2026-02-02 01:10:45.022938 | orchestrator | service-ks-register : glance | Granting/revoking user roles ------------- 4.00s 2026-02-02 01:10:45.022946 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.87s 2026-02-02 01:10:45.022953 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.84s 2026-02-02 01:10:45.022960 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.62s 2026-02-02 01:10:45.022968 | orchestrator | 2026-02-02 01:10:45 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:10:45.022976 | orchestrator | 2026-02-02 01:10:45 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:10:48.068646 | orchestrator | 2026-02-02 01:10:48 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:10:48.070095 | orchestrator | 2026-02-02 01:10:48 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:10:48.071892 | orchestrator | 2026-02-02 01:10:48 | INFO  | Task b5e09eaa-52a8-4f71-864b-650e92d48595 is in state STARTED 2026-02-02 01:10:48.073157 | orchestrator | 2026-02-02 01:10:48 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:10:48.073205 | orchestrator | 2026-02-02 01:10:48 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:10:51.125934 | orchestrator | 2026-02-02 01:10:51 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:10:51.126896 | orchestrator | 2026-02-02 01:10:51 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:10:51.128105 | orchestrator | 2026-02-02 01:10:51 | INFO  | Task b5e09eaa-52a8-4f71-864b-650e92d48595 is in state STARTED 2026-02-02 01:10:51.129157 | orchestrator | 2026-02-02 01:10:51 | INFO  | Task 
455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:10:51.129198 | orchestrator | 2026-02-02 01:10:51 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:10:54.165703 | orchestrator | 2026-02-02 01:10:54 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:10:54.167791 | orchestrator | 2026-02-02 01:10:54 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:10:54.169078 | orchestrator | 2026-02-02 01:10:54 | INFO  | Task b5e09eaa-52a8-4f71-864b-650e92d48595 is in state STARTED 2026-02-02 01:10:54.169327 | orchestrator | 2026-02-02 01:10:54 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:10:54.169412 | orchestrator | 2026-02-02 01:10:54 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:10:57.218402 | orchestrator | 2026-02-02 01:10:57 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:10:57.219570 | orchestrator | 2026-02-02 01:10:57 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:10:57.220733 | orchestrator | 2026-02-02 01:10:57 | INFO  | Task b5e09eaa-52a8-4f71-864b-650e92d48595 is in state STARTED 2026-02-02 01:10:57.221917 | orchestrator | 2026-02-02 01:10:57 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:10:57.221951 | orchestrator | 2026-02-02 01:10:57 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:11:00.277343 | orchestrator | 2026-02-02 01:11:00 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:11:00.277502 | orchestrator | 2026-02-02 01:11:00 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:11:00.277522 | orchestrator | 2026-02-02 01:11:00 | INFO  | Task b5e09eaa-52a8-4f71-864b-650e92d48595 is in state STARTED 2026-02-02 01:11:00.277534 | orchestrator | 2026-02-02 01:11:00 | INFO  | Task 
455577a8-a88c-465b-9ae3-4c38a2bf9def is in state STARTED 2026-02-02 01:11:00.277546 | orchestrator | 2026-02-02 01:11:00 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:11:03.320512 | orchestrator | 2026-02-02 01:11:03 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:11:03.321116 | orchestrator | 2026-02-02 01:11:03 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:11:03.325564 | orchestrator | 2026-02-02 01:11:03 | INFO  | Task b5e09eaa-52a8-4f71-864b-650e92d48595 is in state STARTED 2026-02-02 01:11:03.329998 | orchestrator | 2026-02-02 01:11:03 | INFO  | Task 455577a8-a88c-465b-9ae3-4c38a2bf9def is in state SUCCESS 2026-02-02 01:11:03.331991 | orchestrator | 2026-02-02 01:11:03.332053 | orchestrator | 2026-02-02 01:11:03.332063 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 01:11:03.332073 | orchestrator | 2026-02-02 01:11:03.332151 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 01:11:03.332164 | orchestrator | Monday 02 February 2026 01:07:38 +0000 (0:00:00.269) 0:00:00.269 ******* 2026-02-02 01:11:03.332172 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:11:03.332180 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:11:03.332187 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:11:03.332195 | orchestrator | 2026-02-02 01:11:03.332202 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 01:11:03.332210 | orchestrator | Monday 02 February 2026 01:07:39 +0000 (0:00:00.378) 0:00:00.648 ******* 2026-02-02 01:11:03.332217 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-02 01:11:03.332225 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-02 01:11:03.332232 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-02 
01:11:03.332241 | orchestrator | 2026-02-02 01:11:03.332253 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-02 01:11:03.332265 | orchestrator | 2026-02-02 01:11:03.332276 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-02 01:11:03.332288 | orchestrator | Monday 02 February 2026 01:07:39 +0000 (0:00:00.442) 0:00:01.090 ******* 2026-02-02 01:11:03.332299 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:11:03.332312 | orchestrator | 2026-02-02 01:11:03.332325 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting services] *************** 2026-02-02 01:11:03.332424 | orchestrator | Monday 02 February 2026 01:07:40 +0000 (0:00:00.554) 0:00:01.644 ******* 2026-02-02 01:11:03.332872 | orchestrator | changed: [testbed-node-0] => (item=cinder (block-storage)) 2026-02-02 01:11:03.332897 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-02 01:11:03.332905 | orchestrator | 2026-02-02 01:11:03.332913 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting endpoints] ************** 2026-02-02 01:11:03.332921 | orchestrator | Monday 02 February 2026 01:07:47 +0000 (0:00:06.759) 0:00:08.404 ******* 2026-02-02 01:11:03.332929 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api-int.testbed.osism.xyz:8776/v3 -> internal) 2026-02-02 01:11:03.332953 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api.testbed.osism.xyz:8776/v3 -> public) 2026-02-02 01:11:03.332960 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-02 01:11:03.332988 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-02 01:11:03.332995 | 
orchestrator | 2026-02-02 01:11:03.333002 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-02 01:11:03.333009 | orchestrator | Monday 02 February 2026 01:08:00 +0000 (0:00:13.510) 0:00:21.915 ******* 2026-02-02 01:11:03.333016 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-02 01:11:03.333023 | orchestrator | 2026-02-02 01:11:03.333030 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-02 01:11:03.333037 | orchestrator | Monday 02 February 2026 01:08:04 +0000 (0:00:04.023) 0:00:25.938 ******* 2026-02-02 01:11:03.333044 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-02 01:11:03.333122 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 01:11:03.333132 | orchestrator | 2026-02-02 01:11:03.333140 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-02 01:11:03.333147 | orchestrator | Monday 02 February 2026 01:08:08 +0000 (0:00:03.973) 0:00:29.911 ******* 2026-02-02 01:11:03.333153 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 01:11:03.333160 | orchestrator | 2026-02-02 01:11:03.333167 | orchestrator | TASK [service-ks-register : cinder | Granting/revoking user roles] ************* 2026-02-02 01:11:03.333174 | orchestrator | Monday 02 February 2026 01:08:12 +0000 (0:00:03.453) 0:00:33.365 ******* 2026-02-02 01:11:03.333181 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-02 01:11:03.333187 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-02 01:11:03.333194 | orchestrator | 2026-02-02 01:11:03.333201 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-02 01:11:03.333208 | orchestrator | Monday 02 February 2026 01:08:19 +0000 (0:00:07.721) 0:00:41.087 ******* 2026-02-02 
01:11:03.334064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.334107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.334143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.334159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.334170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.334182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.334240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.334256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.334282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.334294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.334301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.334309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.334316 | orchestrator | 2026-02-02 01:11:03.334397 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-02 01:11:03.334411 | orchestrator | Monday 02 February 2026 01:08:22 +0000 (0:00:02.356) 0:00:43.443 ******* 2026-02-02 01:11:03.334418 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:11:03.334425 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:11:03.334432 | orchestrator | skipping: 
[testbed-node-2] 2026-02-02 01:11:03.334439 | orchestrator | 2026-02-02 01:11:03.334446 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-02 01:11:03.334456 | orchestrator | Monday 02 February 2026 01:08:22 +0000 (0:00:00.522) 0:00:43.966 ******* 2026-02-02 01:11:03.334469 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:11:03.334492 | orchestrator | 2026-02-02 01:11:03.334504 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-02 01:11:03.334515 | orchestrator | Monday 02 February 2026 01:08:23 +0000 (0:00:01.165) 0:00:45.131 ******* 2026-02-02 01:11:03.334526 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-02 01:11:03.334534 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-02 01:11:03.334541 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-02 01:11:03.334550 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-02 01:11:03.334562 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-02 01:11:03.334573 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-02 01:11:03.334584 | orchestrator | 2026-02-02 01:11:03.334594 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-02 01:11:03.334601 | orchestrator | Monday 02 February 2026 01:08:26 +0000 (0:00:02.308) 0:00:47.440 ******* 2026-02-02 01:11:03.334614 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-02-02 01:11:03.334625 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-02-02 
01:11:03.334663 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-02-02 01:11:03.334683 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-02-02 01:11:03.334698 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-02-02 01:11:03.334707 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-02-02 01:11:03.334716 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-02-02 01:11:03.334744 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-02-02 01:11:03.334763 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-02-02 01:11:03.334771 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-02-02 01:11:03.334779 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-02-02 01:11:03.334805 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-02-02 01:11:03.334820 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-02-02 01:11:03.334832 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-02-02 01:11:03.334840 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-02-02 01:11:03.334847 | orchestrator | ok: 
[testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-02-02 01:11:03.334874 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-02-02 01:11:03.334888 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-02-02 01:11:03.334899 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-02-02 01:11:03.334906 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-02-02 01:11:03.334914 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-02-02 01:11:03.334938 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-02-02 01:11:03.334952 | orchestrator | ok: 
[testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-02-02 01:11:03.334962 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-02-02 01:11:03.334969 | orchestrator | 2026-02-02 01:11:03.334976 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-02 01:11:03.334983 | orchestrator | Monday 02 February 2026 01:08:35 +0000 (0:00:09.056) 0:00:56.496 ******* 2026-02-02 01:11:03.334990 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-02-02 01:11:03.334999 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-02-02 01:11:03.335005 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-02-02 01:11:03.335012 | orchestrator | 2026-02-02 01:11:03.335019 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-02 01:11:03.335026 | orchestrator | Monday 02 February 2026 01:08:38 +0000 (0:00:03.247) 0:00:59.743 ******* 2026-02-02 01:11:03.335033 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-02-02 01:11:03.335039 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-02-02 01:11:03.335046 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-02-02 01:11:03.335058 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-02-02 01:11:03.335064 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-02-02 01:11:03.335071 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 
2026-02-02 01:11:03.335078 | orchestrator | 2026-02-02 01:11:03.335085 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-02 01:11:03.335092 | orchestrator | Monday 02 February 2026 01:08:42 +0000 (0:00:03.676) 0:01:03.420 ******* 2026-02-02 01:11:03.335099 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-02 01:11:03.335106 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-02 01:11:03.335112 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-02 01:11:03.335137 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-02 01:11:03.335148 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-02 01:11:03.335160 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-02 01:11:03.335171 | orchestrator | 2026-02-02 01:11:03.335183 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-02 01:11:03.335193 | orchestrator | Monday 02 February 2026 01:08:43 +0000 (0:00:00.918) 0:01:04.339 ******* 2026-02-02 01:11:03.335200 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:11:03.335207 | orchestrator | 2026-02-02 01:11:03.335214 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-02 01:11:03.335220 | orchestrator | Monday 02 February 2026 01:08:43 +0000 (0:00:00.109) 0:01:04.448 ******* 2026-02-02 01:11:03.335227 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:11:03.335234 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:11:03.335241 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:11:03.335247 | orchestrator | 2026-02-02 01:11:03.335254 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-02 01:11:03.335261 | orchestrator | Monday 02 February 2026 01:08:43 +0000 (0:00:00.340) 0:01:04.789 ******* 2026-02-02 01:11:03.335267 | 
orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:11:03.335275 | orchestrator | 2026-02-02 01:11:03.335281 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-02 01:11:03.335288 | orchestrator | Monday 02 February 2026 01:08:44 +0000 (0:00:00.861) 0:01:05.650 ******* 2026-02-02 01:11:03.335300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.335308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.335344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.335407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.335425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.335436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.335444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.335457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.335464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.335499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.335517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.335541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.335552 | 
orchestrator | 2026-02-02 01:11:03.335562 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-02 01:11:03.335580 | orchestrator | Monday 02 February 2026 01:08:48 +0000 (0:00:04.394) 0:01:10.045 ******* 2026-02-02 01:11:03.335592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:11:03.335604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335649 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335675 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:11:03.335693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:11:03.335712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 
01:11:03.335749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335757 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:11:03.335764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:11:03.335776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335804 | 
orchestrator | skipping: [testbed-node-2] 2026-02-02 01:11:03.335810 | orchestrator | 2026-02-02 01:11:03.335817 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-02 01:11:03.335824 | orchestrator | Monday 02 February 2026 01:08:50 +0000 (0:00:01.402) 0:01:11.447 ******* 2026-02-02 01:11:03.335837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:11:03.335845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335891 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:11:03.335903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:11:03.335916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335952 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:11:03.335963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:11:03.335971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.335998 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:11:03.336005 | orchestrator | 2026-02-02 01:11:03.336012 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-02-02 01:11:03.336019 | orchestrator | Monday 02 February 2026 01:08:52 +0000 (0:00:02.578) 0:01:14.026 ******* 2026-02-02 01:11:03.336026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.336043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.336052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.336063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336175 | 
orchestrator | 2026-02-02 01:11:03.336186 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-02 01:11:03.336198 | orchestrator | Monday 02 February 2026 01:08:57 +0000 (0:00:05.117) 0:01:19.143 ******* 2026-02-02 01:11:03.336213 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-02-02 01:11:03.336220 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:11:03.336227 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-02-02 01:11:03.336234 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:11:03.336241 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-02-02 01:11:03.336248 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:11:03.336255 | orchestrator | 2026-02-02 01:11:03.336261 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-02-02 01:11:03.336268 | orchestrator | Monday 02 February 2026 01:08:58 +0000 (0:00:00.792) 0:01:19.936 ******* 2026-02-02 01:11:03.336275 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:11:03.336282 | orchestrator | 2026-02-02 01:11:03.336289 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-02-02 01:11:03.336295 | orchestrator | Monday 02 February 2026 01:09:00 +0000 (0:00:01.434) 0:01:21.370 ******* 2026-02-02 01:11:03.336302 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:11:03.336309 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:11:03.336316 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:11:03.336323 | orchestrator | 2026-02-02 01:11:03.336329 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-02 01:11:03.336336 | orchestrator | Monday 
02 February 2026 01:09:02 +0000 (0:00:02.334) 0:01:23.704 ******* 2026-02-02 01:11:03.336344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.336381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.336399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.336407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336444 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336483 | orchestrator | 2026-02-02 01:11:03.336491 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-02 01:11:03.336502 | orchestrator | Monday 02 February 2026 01:09:17 +0000 (0:00:14.998) 0:01:38.702 ******* 2026-02-02 01:11:03.336509 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:11:03.336516 | orchestrator | changed: 
[testbed-node-0] 2026-02-02 01:11:03.336523 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:11:03.336530 | orchestrator | 2026-02-02 01:11:03.336537 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-02 01:11:03.336544 | orchestrator | Monday 02 February 2026 01:09:18 +0000 (0:00:01.606) 0:01:40.308 ******* 2026-02-02 01:11:03.336556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:11:03.336564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.336575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.336582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.336590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:11:03.336606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.336614 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:11:03.336621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.336632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.336639 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:11:03.336646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:11:03.336654 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.336673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.336685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.336697 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:11:03.336708 | orchestrator | 2026-02-02 01:11:03.336719 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-02 01:11:03.336730 | orchestrator | Monday 02 February 2026 01:09:20 +0000 (0:00:01.197) 0:01:41.506 ******* 2026-02-02 01:11:03.336741 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:11:03.336752 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:11:03.336761 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:11:03.336771 | orchestrator | 2026-02-02 01:11:03.336782 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-02-02 01:11:03.336792 | orchestrator | Monday 02 February 2026 01:09:20 +0000 (0:00:00.357) 0:01:41.863 ******* 2026-02-02 01:11:03.336809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 
01:11:03.336823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.336848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:11:03.336861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 01:11:03.336983 | orchestrator | 2026-02-02 01:11:03.336990 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-02-02 01:11:03.336997 | orchestrator | Monday 02 February 2026 01:09:23 +0000 (0:00:03.230) 0:01:45.093 ******* 2026-02-02 01:11:03.337004 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 01:11:03.337011 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:11:03.337019 | orchestrator | } 2026-02-02 01:11:03.337026 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 01:11:03.337032 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:11:03.337039 | orchestrator | } 2026-02-02 01:11:03.337046 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 01:11:03.337053 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:11:03.337060 | orchestrator | } 2026-02-02 01:11:03.337071 | orchestrator | 2026-02-02 01:11:03.337082 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 01:11:03.337093 | orchestrator | Monday 02 February 2026 01:09:24 +0000 (0:00:01.026) 0:01:46.119 ******* 2026-02-02 01:11:03.337105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:11:03.337123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.337132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.337142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.337155 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:11:03.337162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:11:03.337170 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.337182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.337189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.337196 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:11:03.337207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:11:03.337219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.337226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.337233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 01:11:03.337240 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:11:03.337247 | orchestrator | 2026-02-02 01:11:03.337258 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-02 01:11:03.337265 | orchestrator | Monday 02 February 2026 01:09:26 +0000 (0:00:01.269) 0:01:47.388 ******* 2026-02-02 01:11:03.337272 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:11:03.337281 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:11:03.337292 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:11:03.337302 | orchestrator | 2026-02-02 
01:11:03.337313 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-02 01:11:03.337326 | orchestrator | Monday 02 February 2026 01:09:26 +0000 (0:00:00.309) 0:01:47.698 ******* 2026-02-02 01:11:03.337338 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:11:03.337349 | orchestrator | 2026-02-02 01:11:03.337379 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-02 01:11:03.337390 | orchestrator | Monday 02 February 2026 01:09:28 +0000 (0:00:02.296) 0:01:49.995 ******* 2026-02-02 01:11:03.337401 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:11:03.337414 | orchestrator | 2026-02-02 01:11:03.337426 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-02 01:11:03.337439 | orchestrator | Monday 02 February 2026 01:09:31 +0000 (0:00:02.942) 0:01:52.937 ******* 2026-02-02 01:11:03.337450 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:11:03.337462 | orchestrator | 2026-02-02 01:11:03.337469 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-02 01:11:03.337482 | orchestrator | Monday 02 February 2026 01:09:53 +0000 (0:00:21.397) 0:02:14.335 ******* 2026-02-02 01:11:03.337489 | orchestrator | 2026-02-02 01:11:03.337496 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-02 01:11:03.337503 | orchestrator | Monday 02 February 2026 01:09:53 +0000 (0:00:00.097) 0:02:14.432 ******* 2026-02-02 01:11:03.337510 | orchestrator | 2026-02-02 01:11:03.337521 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-02 01:11:03.337534 | orchestrator | Monday 02 February 2026 01:09:53 +0000 (0:00:00.072) 0:02:14.504 ******* 2026-02-02 01:11:03.337546 | orchestrator | 2026-02-02 01:11:03.337559 | orchestrator | RUNNING HANDLER [cinder : Restart 
cinder-api container] ************************ 2026-02-02 01:11:03.337570 | orchestrator | Monday 02 February 2026 01:09:53 +0000 (0:00:00.070) 0:02:14.575 ******* 2026-02-02 01:11:03.337581 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:11:03.337611 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:11:03.337621 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:11:03.337628 | orchestrator | 2026-02-02 01:11:03.337640 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-02 01:11:03.337647 | orchestrator | Monday 02 February 2026 01:10:16 +0000 (0:00:23.003) 0:02:37.579 ******* 2026-02-02 01:11:03.337654 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:11:03.337661 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:11:03.337668 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:11:03.337674 | orchestrator | 2026-02-02 01:11:03.337681 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-02 01:11:03.337688 | orchestrator | Monday 02 February 2026 01:10:23 +0000 (0:00:06.815) 0:02:44.394 ******* 2026-02-02 01:11:03.337695 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:11:03.337701 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:11:03.337708 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:11:03.337715 | orchestrator | 2026-02-02 01:11:03.337722 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-02 01:11:03.337728 | orchestrator | Monday 02 February 2026 01:10:51 +0000 (0:00:28.346) 0:03:12.741 ******* 2026-02-02 01:11:03.337735 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:11:03.337742 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:11:03.337749 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:11:03.337756 | orchestrator | 2026-02-02 01:11:03.337762 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder 
services to update service versions] *** 2026-02-02 01:11:03.337769 | orchestrator | Monday 02 February 2026 01:11:01 +0000 (0:00:10.540) 0:03:23.282 ******* 2026-02-02 01:11:03.337776 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:11:03.337783 | orchestrator | 2026-02-02 01:11:03.337790 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:11:03.337797 | orchestrator | testbed-node-0 : ok=32  changed=23  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-02 01:11:03.337806 | orchestrator | testbed-node-1 : ok=23  changed=16  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-02 01:11:03.337813 | orchestrator | testbed-node-2 : ok=23  changed=16  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-02 01:11:03.337820 | orchestrator | 2026-02-02 01:11:03.337827 | orchestrator | 2026-02-02 01:11:03.337834 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:11:03.337840 | orchestrator | Monday 02 February 2026 01:11:02 +0000 (0:00:00.259) 0:03:23.541 ******* 2026-02-02 01:11:03.337847 | orchestrator | =============================================================================== 2026-02-02 01:11:03.337854 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 28.35s 2026-02-02 01:11:03.337861 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.00s 2026-02-02 01:11:03.337867 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.40s 2026-02-02 01:11:03.337882 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.00s 2026-02-02 01:11:03.337893 | orchestrator | service-ks-register : cinder | Creating/deleting endpoints ------------- 13.51s 2026-02-02 01:11:03.337904 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.54s 
2026-02-02 01:11:03.337915 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 9.06s 2026-02-02 01:11:03.337926 | orchestrator | service-ks-register : cinder | Granting/revoking user roles ------------- 7.72s 2026-02-02 01:11:03.337942 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.82s 2026-02-02 01:11:03.337949 | orchestrator | service-ks-register : cinder | Creating/deleting services --------------- 6.76s 2026-02-02 01:11:03.337956 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.12s 2026-02-02 01:11:03.337963 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.39s 2026-02-02 01:11:03.337970 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 4.02s 2026-02-02 01:11:03.337976 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.97s 2026-02-02 01:11:03.337986 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.68s 2026-02-02 01:11:03.337996 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.45s 2026-02-02 01:11:03.338010 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.25s 2026-02-02 01:11:03.338067 | orchestrator | service-check-containers : cinder | Check containers -------------------- 3.23s 2026-02-02 01:11:03.338081 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.94s 2026-02-02 01:11:03.338093 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.58s 2026-02-02 01:11:03.338105 | orchestrator | 2026-02-02 01:11:03 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:11:06.377265 | orchestrator | 2026-02-02 01:11:06 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 
01:11:06.377638 | orchestrator | 2026-02-02 01:11:06 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:11:06.378747 | orchestrator | 2026-02-02 01:11:06 | INFO  | Task b5e09eaa-52a8-4f71-864b-650e92d48595 is in state STARTED 2026-02-02 01:11:06.378766 | orchestrator | 2026-02-02 01:11:06 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:12:01.233058 | orchestrator | 2026-02-02 01:12:01 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:12:01.234336 | orchestrator | 2026-02-02 01:12:01 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:12:01.235050 | orchestrator | 2026-02-02 01:12:01 | INFO  | Task b5e09eaa-52a8-4f71-864b-650e92d48595 is in state STARTED 2026-02-02 01:12:01.235330 | orchestrator | 2026-02-02 01:12:01 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:12:04.285193 | orchestrator | 2026-02-02 01:12:04 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:12:04.287535 | orchestrator | 2026-02-02 01:12:04 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in
state STARTED 2026-02-02 01:12:04.290325 | orchestrator | 2026-02-02 01:12:04 | INFO  | Task b5e09eaa-52a8-4f71-864b-650e92d48595 is in state STARTED 2026-02-02 01:12:04.290360 | orchestrator | 2026-02-02 01:12:04 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:12:07.328378 | orchestrator | 2026-02-02 01:12:07 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:12:07.329539 | orchestrator | 2026-02-02 01:12:07 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state STARTED 2026-02-02 01:12:07.330695 | orchestrator | 2026-02-02 01:12:07 | INFO  | Task b5e09eaa-52a8-4f71-864b-650e92d48595 is in state SUCCESS 2026-02-02 01:12:07.331691 | orchestrator | 2026-02-02 01:12:07.331730 | orchestrator | 2026-02-02 01:12:07.331743 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 01:12:07.331756 | orchestrator | 2026-02-02 01:12:07.331767 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 01:12:07.331779 | orchestrator | Monday 02 February 2026 01:10:48 +0000 (0:00:00.265) 0:00:00.265 ******* 2026-02-02 01:12:07.331791 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:12:07.331809 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:12:07.331825 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:12:07.331837 | orchestrator | 2026-02-02 01:12:07.331848 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 01:12:07.331859 | orchestrator | Monday 02 February 2026 01:10:48 +0000 (0:00:00.317) 0:00:00.583 ******* 2026-02-02 01:12:07.331870 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-02-02 01:12:07.331882 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-02-02 01:12:07.331893 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-02-02 01:12:07.331904 | orchestrator | 2026-02-02 
01:12:07.331915 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-02-02 01:12:07.331926 | orchestrator | 2026-02-02 01:12:07.331937 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-02 01:12:07.331947 | orchestrator | Monday 02 February 2026 01:10:49 +0000 (0:00:00.469) 0:00:01.052 ******* 2026-02-02 01:12:07.331958 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:12:07.331971 | orchestrator | 2026-02-02 01:12:07.331982 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-02-02 01:12:07.331993 | orchestrator | Monday 02 February 2026 01:10:49 +0000 (0:00:00.571) 0:00:01.624 ******* 2026-02-02 01:12:07.332008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.332052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.332079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.332091 | orchestrator | 2026-02-02 01:12:07.332102 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-02 01:12:07.332114 | orchestrator | Monday 02 February 2026 01:10:50 +0000 (0:00:00.723) 0:00:02.348 ******* 2026-02-02 01:12:07.332125 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 01:12:07.332136 | orchestrator | 2026-02-02 01:12:07.332147 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-02 01:12:07.332159 | orchestrator | Monday 02 February 2026 01:10:51 +0000 (0:00:00.967) 0:00:03.315 ******* 2026-02-02 01:12:07.332170 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-02 01:12:07.332181 | orchestrator | 2026-02-02 01:12:07.332192 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-02 01:12:07.332217 | orchestrator | Monday 02 February 2026 01:10:52 +0000 (0:00:00.808) 0:00:04.123 ******* 2026-02-02 01:12:07.332229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.332270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.332300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.332320 | orchestrator | 2026-02-02 01:12:07.332340 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-02 01:12:07.332357 | orchestrator | Monday 02 February 2026 01:10:53 +0000 (0:00:01.496) 0:00:05.620 ******* 2026-02-02 01:12:07.332376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:12:07.332396 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:12:07.332438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:12:07.332459 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:12:07.332488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:12:07.332503 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:12:07.332515 | orchestrator | 2026-02-02 01:12:07.332528 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-02 01:12:07.332541 | orchestrator | Monday 02 February 2026 01:10:54 +0000 (0:00:00.471) 0:00:06.091 ******* 2026-02-02 01:12:07.332555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:12:07.332577 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:12:07.332590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:12:07.332602 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:12:07.332615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:12:07.332628 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:12:07.332642 | orchestrator | 2026-02-02 01:12:07.332654 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-02 01:12:07.332673 | orchestrator | Monday 02 February 2026 01:10:55 +0000 (0:00:00.981) 0:00:07.073 ******* 2026-02-02 01:12:07.332691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.332704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.332724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.332736 | orchestrator | 2026-02-02 01:12:07.332747 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-02 01:12:07.332758 | orchestrator | Monday 02 February 2026 01:10:56 +0000 (0:00:01.239) 0:00:08.312 ******* 2026-02-02 01:12:07.332769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.332781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.332797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.332809 | orchestrator | 2026-02-02 01:12:07.332820 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-02 01:12:07.332837 | orchestrator | Monday 02 February 2026 01:10:57 +0000 (0:00:01.300) 0:00:09.613 ******* 2026-02-02 
01:12:07.332849 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:12:07.332860 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:12:07.332871 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:12:07.332882 | orchestrator | 2026-02-02 01:12:07.332893 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-02-02 01:12:07.332911 | orchestrator | Monday 02 February 2026 01:10:58 +0000 (0:00:00.539) 0:00:10.153 ******* 2026-02-02 01:12:07.332922 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-02 01:12:07.332933 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-02 01:12:07.332944 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-02 01:12:07.332955 | orchestrator | 2026-02-02 01:12:07.332966 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-02-02 01:12:07.332977 | orchestrator | Monday 02 February 2026 01:10:59 +0000 (0:00:01.268) 0:00:11.421 ******* 2026-02-02 01:12:07.332988 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-02 01:12:07.332999 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-02 01:12:07.333010 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-02 01:12:07.333021 | orchestrator | 2026-02-02 01:12:07.333032 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-02-02 01:12:07.333043 | orchestrator | Monday 02 February 2026 01:11:01 +0000 (0:00:01.337) 0:00:12.759 ******* 2026-02-02 01:12:07.333054 | orchestrator | ok: [testbed-node-0 -> 
localhost] 2026-02-02 01:12:07.333065 | orchestrator | 2026-02-02 01:12:07.333076 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-02-02 01:12:07.333087 | orchestrator | Monday 02 February 2026 01:11:01 +0000 (0:00:00.842) 0:00:13.602 ******* 2026-02-02 01:12:07.333098 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:12:07.333109 | orchestrator | ok: [testbed-node-1] 2026-02-02 01:12:07.333120 | orchestrator | ok: [testbed-node-2] 2026-02-02 01:12:07.333131 | orchestrator | 2026-02-02 01:12:07.333141 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-02-02 01:12:07.333152 | orchestrator | Monday 02 February 2026 01:11:02 +0000 (0:00:00.770) 0:00:14.372 ******* 2026-02-02 01:12:07.333163 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:12:07.333174 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:12:07.333185 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:12:07.333197 | orchestrator | 2026-02-02 01:12:07.333208 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-02-02 01:12:07.333219 | orchestrator | Monday 02 February 2026 01:11:04 +0000 (0:00:01.533) 0:00:15.906 ******* 2026-02-02 01:12:07.333230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.333279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.333308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:12:07.333321 | orchestrator | 2026-02-02 01:12:07.333332 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-02-02 01:12:07.333343 | orchestrator | Monday 02 February 2026 01:11:05 +0000 (0:00:01.321) 0:00:17.227 ******* 2026-02-02 01:12:07.333354 | orchestrator | changed: [testbed-node-0] 
=> { 2026-02-02 01:12:07.333365 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:12:07.333376 | orchestrator | } 2026-02-02 01:12:07.333387 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 01:12:07.333398 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:12:07.333409 | orchestrator | } 2026-02-02 01:12:07.333420 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 01:12:07.333431 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:12:07.333442 | orchestrator | } 2026-02-02 01:12:07.333453 | orchestrator | 2026-02-02 01:12:07.333464 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 01:12:07.333475 | orchestrator | Monday 02 February 2026 01:11:05 +0000 (0:00:00.334) 0:00:17.561 ******* 2026-02-02 01:12:07.333487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:12:07.333499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:12:07.333510 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:12:07.333521 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:12:07.333532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:12:07.333550 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:12:07.333561 | orchestrator | 2026-02-02 01:12:07.333572 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-02 01:12:07.333588 | orchestrator | Monday 02 February 2026 01:11:06 +0000 (0:00:00.838) 0:00:18.399 ******* 2026-02-02 01:12:07.333599 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:12:07.333610 | orchestrator | 2026-02-02 01:12:07.333621 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-02 01:12:07.333632 | orchestrator | Monday 02 February 2026 01:11:08 +0000 (0:00:02.304) 
0:00:20.703 ******* 2026-02-02 01:12:07.333643 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:12:07.333654 | orchestrator | 2026-02-02 01:12:07.333665 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-02 01:12:07.333676 | orchestrator | Monday 02 February 2026 01:11:11 +0000 (0:00:02.328) 0:00:23.032 ******* 2026-02-02 01:12:07.333687 | orchestrator | 2026-02-02 01:12:07.333698 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-02 01:12:07.333787 | orchestrator | Monday 02 February 2026 01:11:11 +0000 (0:00:00.068) 0:00:23.101 ******* 2026-02-02 01:12:07.333799 | orchestrator | 2026-02-02 01:12:07.333810 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-02 01:12:07.333828 | orchestrator | Monday 02 February 2026 01:11:11 +0000 (0:00:00.067) 0:00:23.168 ******* 2026-02-02 01:12:07.333839 | orchestrator | 2026-02-02 01:12:07.333851 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-02 01:12:07.333862 | orchestrator | Monday 02 February 2026 01:11:11 +0000 (0:00:00.072) 0:00:23.241 ******* 2026-02-02 01:12:07.333873 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:12:07.333884 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:12:07.333895 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:12:07.333906 | orchestrator | 2026-02-02 01:12:07.333916 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-02 01:12:07.333927 | orchestrator | Monday 02 February 2026 01:11:18 +0000 (0:00:06.953) 0:00:30.195 ******* 2026-02-02 01:12:07.333938 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:12:07.333949 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:12:07.333960 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node 
(12 retries left). 2026-02-02 01:12:07.333971 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:12:07.333982 | orchestrator | 2026-02-02 01:12:07.333993 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-02 01:12:07.334004 | orchestrator | Monday 02 February 2026 01:11:32 +0000 (0:00:14.422) 0:00:44.617 ******* 2026-02-02 01:12:07.334073 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:12:07.334089 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:12:07.334100 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:12:07.334111 | orchestrator | 2026-02-02 01:12:07.334122 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-02 01:12:07.334133 | orchestrator | Monday 02 February 2026 01:11:59 +0000 (0:00:26.216) 0:01:10.834 ******* 2026-02-02 01:12:07.334144 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:12:07.334155 | orchestrator | 2026-02-02 01:12:07.334166 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-02 01:12:07.334177 | orchestrator | Monday 02 February 2026 01:12:01 +0000 (0:00:02.475) 0:01:13.309 ******* 2026-02-02 01:12:07.334188 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:12:07.334199 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:12:07.334210 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:12:07.334229 | orchestrator | 2026-02-02 01:12:07.334396 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-02 01:12:07.334454 | orchestrator | Monday 02 February 2026 01:12:01 +0000 (0:00:00.345) 0:01:13.654 ******* 2026-02-02 01:12:07.334468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 
'proxy', 'basicAuth': False}}})  2026-02-02 01:12:07.334483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-02 01:12:07.334497 | orchestrator | 2026-02-02 01:12:07.334509 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-02 01:12:07.334520 | orchestrator | Monday 02 February 2026 01:12:04 +0000 (0:00:02.496) 0:01:16.151 ******* 2026-02-02 01:12:07.334555 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:12:07.334567 | orchestrator | 2026-02-02 01:12:07.334578 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:12:07.334591 | orchestrator | testbed-node-0 : ok=22  changed=13  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 01:12:07.334603 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 01:12:07.334615 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 01:12:07.334625 | orchestrator | 2026-02-02 01:12:07.334637 | orchestrator | 2026-02-02 01:12:07.334648 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:12:07.334659 | orchestrator | Monday 02 February 2026 01:12:04 +0000 (0:00:00.298) 0:01:16.449 ******* 2026-02-02 01:12:07.334670 | orchestrator | =============================================================================== 2026-02-02 01:12:07.334680 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 26.22s 2026-02-02 01:12:07.334701 | orchestrator | grafana 
: Waiting for grafana to start on first node ------------------- 14.42s 2026-02-02 01:12:07.334713 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.95s 2026-02-02 01:12:07.334724 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.50s 2026-02-02 01:12:07.334736 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.48s 2026-02-02 01:12:07.334748 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.33s 2026-02-02 01:12:07.334759 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.30s 2026-02-02 01:12:07.334771 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.53s 2026-02-02 01:12:07.334783 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.50s 2026-02-02 01:12:07.334796 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.34s 2026-02-02 01:12:07.334822 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.32s 2026-02-02 01:12:07.334836 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.30s 2026-02-02 01:12:07.334849 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.27s 2026-02-02 01:12:07.334862 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.24s 2026-02-02 01:12:07.334874 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.98s 2026-02-02 01:12:07.334887 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.97s 2026-02-02 01:12:07.334912 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.84s 2026-02-02 01:12:07.334925 | orchestrator | 
service-check-containers : Include tasks -------------------------------- 0.84s 2026-02-02 01:12:07.334936 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.81s 2026-02-02 01:12:07.334944 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.77s 2026-02-02 01:12:07.334952 | orchestrator | 2026-02-02 01:12:07 | INFO  | Wait 1 second(s) until the next check
[2026-02-02 01:12:10 – 01:12:43 | orchestrator | repeated task state polling elided (~3 s interval): Task f69b75bb-b2b0-41d0-bf9c-9912092a445c and Task dd1fa119-9242-4839-af4f-7f0b63cb289b remained in state STARTED]
2026-02-02 01:12:46.922139 | orchestrator | 2026-02-02 01:12:46 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED
2026-02-02 01:12:46.924035 | orchestrator | 2026-02-02 01:12:46 | INFO  | Task dd1fa119-9242-4839-af4f-7f0b63cb289b is in state SUCCESS
2026-02-02 01:12:46.925344 | orchestrator | 2026-02-02 01:12:46 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED
2026-02-02 01:12:46.925378 | orchestrator | 2026-02-02 01:12:46 | INFO  | Wait 1 second(s) until the next check
[2026-02-02 01:12:49 – 01:16:17 | orchestrator | repeated task state polling elided (~3 s interval): Task f69b75bb-b2b0-41d0-bf9c-9912092a445c and Task ae1c2ee7-b992-4620-9de9-78867f5bc951 remained in state STARTED]
| Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:17.213856 | orchestrator | 2026-02-02 01:16:17 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:16:20.257575 | orchestrator | 2026-02-02 01:16:20 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:20.257711 | orchestrator | 2026-02-02 01:16:20 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:20.257726 | orchestrator | 2026-02-02 01:16:20 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:16:23.302345 | orchestrator | 2026-02-02 01:16:23 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:23.303326 | orchestrator | 2026-02-02 01:16:23 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:23.303375 | orchestrator | 2026-02-02 01:16:23 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:16:26.348638 | orchestrator | 2026-02-02 01:16:26 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:26.350629 | orchestrator | 2026-02-02 01:16:26 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:26.350694 | orchestrator | 2026-02-02 01:16:26 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:16:29.385934 | orchestrator | 2026-02-02 01:16:29 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:29.388679 | orchestrator | 2026-02-02 01:16:29 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:29.388748 | orchestrator | 2026-02-02 01:16:29 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:16:32.434005 | orchestrator | 2026-02-02 01:16:32 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:32.434655 | orchestrator | 2026-02-02 01:16:32 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 
01:16:32.435029 | orchestrator | 2026-02-02 01:16:32 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:16:35.476310 | orchestrator | 2026-02-02 01:16:35 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:35.477215 | orchestrator | 2026-02-02 01:16:35 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:35.477273 | orchestrator | 2026-02-02 01:16:35 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:16:38.521920 | orchestrator | 2026-02-02 01:16:38 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:38.523674 | orchestrator | 2026-02-02 01:16:38 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:38.523752 | orchestrator | 2026-02-02 01:16:38 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:16:41.577570 | orchestrator | 2026-02-02 01:16:41 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:41.579388 | orchestrator | 2026-02-02 01:16:41 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:41.579441 | orchestrator | 2026-02-02 01:16:41 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:16:44.622297 | orchestrator | 2026-02-02 01:16:44 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:44.624495 | orchestrator | 2026-02-02 01:16:44 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:44.624574 | orchestrator | 2026-02-02 01:16:44 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:16:47.669788 | orchestrator | 2026-02-02 01:16:47 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:47.671348 | orchestrator | 2026-02-02 01:16:47 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:47.671370 | orchestrator | 2026-02-02 01:16:47 | INFO  | Wait 1 second(s) 
until the next check 2026-02-02 01:16:50.717241 | orchestrator | 2026-02-02 01:16:50 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:50.719226 | orchestrator | 2026-02-02 01:16:50 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:50.719343 | orchestrator | 2026-02-02 01:16:50 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:16:53.756068 | orchestrator | 2026-02-02 01:16:53 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:53.756761 | orchestrator | 2026-02-02 01:16:53 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:53.756792 | orchestrator | 2026-02-02 01:16:53 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:16:56.796486 | orchestrator | 2026-02-02 01:16:56 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:56.798676 | orchestrator | 2026-02-02 01:16:56 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:56.798734 | orchestrator | 2026-02-02 01:16:56 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:16:59.846734 | orchestrator | 2026-02-02 01:16:59 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:16:59.847404 | orchestrator | 2026-02-02 01:16:59 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:16:59.847457 | orchestrator | 2026-02-02 01:16:59 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:02.897485 | orchestrator | 2026-02-02 01:17:02 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:02.899292 | orchestrator | 2026-02-02 01:17:02 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:02.899365 | orchestrator | 2026-02-02 01:17:02 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:05.948121 | orchestrator | 2026-02-02 
01:17:05 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:05.948459 | orchestrator | 2026-02-02 01:17:05 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:05.948679 | orchestrator | 2026-02-02 01:17:05 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:08.997569 | orchestrator | 2026-02-02 01:17:08 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:08.999035 | orchestrator | 2026-02-02 01:17:09 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:08.999077 | orchestrator | 2026-02-02 01:17:09 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:12.043588 | orchestrator | 2026-02-02 01:17:12 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:12.046312 | orchestrator | 2026-02-02 01:17:12 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:12.046389 | orchestrator | 2026-02-02 01:17:12 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:15.097855 | orchestrator | 2026-02-02 01:17:15 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:15.098347 | orchestrator | 2026-02-02 01:17:15 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:15.098383 | orchestrator | 2026-02-02 01:17:15 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:18.128617 | orchestrator | 2026-02-02 01:17:18 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:18.128935 | orchestrator | 2026-02-02 01:17:18 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:18.128973 | orchestrator | 2026-02-02 01:17:18 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:21.171707 | orchestrator | 2026-02-02 01:17:21 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state 
STARTED 2026-02-02 01:17:21.171954 | orchestrator | 2026-02-02 01:17:21 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:21.171981 | orchestrator | 2026-02-02 01:17:21 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:24.208736 | orchestrator | 2026-02-02 01:17:24 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:24.208972 | orchestrator | 2026-02-02 01:17:24 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:24.208998 | orchestrator | 2026-02-02 01:17:24 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:27.238914 | orchestrator | 2026-02-02 01:17:27 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:27.242477 | orchestrator | 2026-02-02 01:17:27 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:27.242562 | orchestrator | 2026-02-02 01:17:27 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:30.276711 | orchestrator | 2026-02-02 01:17:30 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:30.278326 | orchestrator | 2026-02-02 01:17:30 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:30.278565 | orchestrator | 2026-02-02 01:17:30 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:33.325831 | orchestrator | 2026-02-02 01:17:33 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:33.326206 | orchestrator | 2026-02-02 01:17:33 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:33.326240 | orchestrator | 2026-02-02 01:17:33 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:36.372777 | orchestrator | 2026-02-02 01:17:36 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:36.374305 | orchestrator | 2026-02-02 01:17:36 | INFO  
| Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:36.374342 | orchestrator | 2026-02-02 01:17:36 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:39.420590 | orchestrator | 2026-02-02 01:17:39 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:39.422264 | orchestrator | 2026-02-02 01:17:39 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:39.422321 | orchestrator | 2026-02-02 01:17:39 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:42.465981 | orchestrator | 2026-02-02 01:17:42 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:42.467518 | orchestrator | 2026-02-02 01:17:42 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:42.467572 | orchestrator | 2026-02-02 01:17:42 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:45.505106 | orchestrator | 2026-02-02 01:17:45 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:45.506981 | orchestrator | 2026-02-02 01:17:45 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:45.507021 | orchestrator | 2026-02-02 01:17:45 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:48.555225 | orchestrator | 2026-02-02 01:17:48 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:48.555712 | orchestrator | 2026-02-02 01:17:48 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 01:17:48.555748 | orchestrator | 2026-02-02 01:17:48 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:51.601837 | orchestrator | 2026-02-02 01:17:51 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:51.603189 | orchestrator | 2026-02-02 01:17:51 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state STARTED 2026-02-02 
2026-02-02 01:17:51.603234 | orchestrator | 2026-02-02 01:17:51 | INFO  | Wait 1 second(s) until the next check
2026-02-02 01:17:54.654608 | orchestrator | 2026-02-02 01:17:54 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED
2026-02-02 01:17:54.658362 | orchestrator | 2026-02-02 01:17:54 | INFO  | Task ae1c2ee7-b992-4620-9de9-78867f5bc951 is in state SUCCESS
2026-02-02 01:17:54.660441 | orchestrator |
2026-02-02 01:17:54.660687 | orchestrator |
2026-02-02 01:17:54.660752 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 01:17:54.660768 | orchestrator |
2026-02-02 01:17:54.660779 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 01:17:54.660873 | orchestrator | Monday 02 February 2026 01:09:28 +0000 (0:00:00.202) 0:00:00.202 *******
2026-02-02 01:17:54.660886 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:17:54.660899 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:17:54.660910 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:17:54.660920 | orchestrator |
2026-02-02 01:17:54.660931 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 01:17:54.660942 | orchestrator | Monday 02 February 2026 01:09:29 +0000 (0:00:00.427) 0:00:00.629 *******
2026-02-02 01:17:54.660953 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-02-02 01:17:54.660965 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-02-02 01:17:54.660975 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-02-02 01:17:54.660986 | orchestrator |
2026-02-02 01:17:54.660997 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-02-02 01:17:54.661010 | orchestrator |
2026-02-02 01:17:54.661023 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-02-02 01:17:54.661036 | orchestrator | Monday 02 February 2026 01:09:30 +0000 (0:00:00.888) 0:00:01.518 *******
2026-02-02 01:17:54.661049 | orchestrator |
2026-02-02 01:17:54.661061 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-02-02 01:17:54.661074 | orchestrator |
2026-02-02 01:17:54.661101 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-02-02 01:17:54.661114 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:17:54.661127 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:17:54.661139 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:17:54.661153 | orchestrator |
2026-02-02 01:17:54.661165 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 01:17:54.661178 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 01:17:54.661193 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 01:17:54.661206 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 01:17:54.661219 | orchestrator |
2026-02-02 01:17:54.661232 | orchestrator |
2026-02-02 01:17:54.661245 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 01:17:54.661258 | orchestrator | Monday 02 February 2026 01:12:44 +0000 (0:03:13.902) 0:03:15.420 *******
2026-02-02 01:17:54.661271 | orchestrator | ===============================================================================
2026-02-02 01:17:54.661283 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 193.90s
2026-02-02 01:17:54.661297 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s
2026-02-02 01:17:54.661309 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s
2026-02-02 01:17:54.661322 | orchestrator |
2026-02-02 01:17:54.661333 | orchestrator |
2026-02-02 01:17:54.661344 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 01:17:54.661354 | orchestrator |
2026-02-02 01:17:54.661365 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 01:17:54.661376 | orchestrator | Monday 02 February 2026 01:12:49 +0000 (0:00:00.350) 0:00:00.350 *******
2026-02-02 01:17:54.661387 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:17:54.661397 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:17:54.661408 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:17:54.661419 | orchestrator |
2026-02-02 01:17:54.661431 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 01:17:54.661442 | orchestrator | Monday 02 February 2026 01:12:49 +0000 (0:00:00.334) 0:00:00.685 *******
2026-02-02 01:17:54.661452 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-02-02 01:17:54.661463 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-02-02 01:17:54.661482 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-02-02 01:17:54.661549 | orchestrator |
2026-02-02 01:17:54.661563 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-02-02 01:17:54.661619 | orchestrator |
2026-02-02 01:17:54.661631 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-02 01:17:54.661700 | orchestrator | Monday 02 February 2026 01:12:50 +0000 (0:00:00.500) 0:00:01.186 *******
2026-02-02 01:17:54.661713 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:17:54.661725 | orchestrator |
2026-02-02 01:17:54.661736 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting services] **************
2026-02-02 01:17:54.661747 | orchestrator | Monday 02 February 2026 01:12:50 +0000 (0:00:00.609) 0:00:01.796 *******
2026-02-02 01:17:54.661758 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-02-02 01:17:54.661769 | orchestrator |
2026-02-02 01:17:54.661780 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting endpoints] *************
2026-02-02 01:17:54.661790 | orchestrator | Monday 02 February 2026 01:12:54 +0000 (0:00:03.702) 0:00:05.498 *******
2026-02-02 01:17:54.661801 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-02-02 01:17:54.661812 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-02-02 01:17:54.661823 | orchestrator |
2026-02-02 01:17:54.661834 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-02-02 01:17:54.661845 | orchestrator | Monday 02 February 2026 01:13:01 +0000 (0:00:06.830) 0:00:12.328 *******
2026-02-02 01:17:54.661856 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-02 01:17:54.661867 | orchestrator |
2026-02-02 01:17:54.661897 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-02-02 01:17:54.661909 | orchestrator | Monday 02 February 2026 01:13:04 +0000 (0:00:03.719) 0:00:16.048 *******
2026-02-02 01:17:54.661919 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-02 01:17:54.661930 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-02 01:17:54.661941 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-02 01:17:54.661952 | orchestrator |
2026-02-02 01:17:54.661964 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-02-02 01:17:54.661975 | orchestrator | Monday 02 February 2026 01:13:13 +0000 (0:00:08.210) 0:00:24.258 *******
2026-02-02 01:17:54.661986 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-02 01:17:54.661997 | orchestrator |
2026-02-02 01:17:54.662008 | orchestrator | TASK [service-ks-register : octavia | Granting/revoking user roles] ************
2026-02-02 01:17:54.662072 | orchestrator | Monday 02 February 2026 01:13:16 +0000 (0:00:03.710) 0:00:27.968 *******
2026-02-02 01:17:54.662083 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-02 01:17:54.662094 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-02 01:17:54.662105 | orchestrator |
2026-02-02 01:17:54.662116 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-02-02 01:17:54.662127 | orchestrator | Monday 02 February 2026 01:13:24 +0000 (0:00:07.769) 0:00:35.738 *******
2026-02-02 01:17:54.662138 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-02-02 01:17:54.662155 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-02-02 01:17:54.662167 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-02-02 01:17:54.662177 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-02-02 01:17:54.662189 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-02-02 01:17:54.662199 | orchestrator |
2026-02-02 01:17:54.662210 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-02 01:17:54.662221 | orchestrator | Monday 02 February 2026 01:13:41 +0000 (0:00:16.371) 0:00:52.109 *******
2026-02-02 01:17:54.662242 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:17:54.662253 | orchestrator |
2026-02-02 01:17:54.662264 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-02-02 01:17:54.662275 | orchestrator | Monday 02 February 2026 01:13:41 +0000 (0:00:00.637) 0:00:52.747 *******
2026-02-02 01:17:54.662285 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:17:54.662296 | orchestrator |
2026-02-02 01:17:54.662307 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-02-02 01:17:54.662318 | orchestrator | Monday 02 February 2026 01:13:47 +0000 (0:00:05.716) 0:00:58.464 *******
2026-02-02 01:17:54.662329 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:17:54.662339 | orchestrator |
2026-02-02 01:17:54.662350 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-02 01:17:54.662361 | orchestrator | Monday 02 February 2026 01:13:52 +0000 (0:00:05.181) 0:01:03.646 *******
2026-02-02 01:17:54.662372 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:17:54.662383 | orchestrator |
2026-02-02 01:17:54.662393 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-02-02 01:17:54.662404 | orchestrator | Monday 02 February 2026 01:13:55 +0000 (0:00:03.399) 0:01:07.045 *******
2026-02-02 01:17:54.662415 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-02 01:17:54.662426 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-02 01:17:54.662437 | orchestrator |
2026-02-02 01:17:54.662448 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-02-02 01:17:54.662458 | orchestrator | Monday 02 February 2026 01:14:06 +0000 (0:00:10.655) 0:01:17.701 *******
2026-02-02 01:17:54.662469 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-02-02 01:17:54.662481 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-02-02 01:17:54.662518 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-02-02 01:17:54.662532 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-02-02 01:17:54.662543 | orchestrator |
2026-02-02 01:17:54.662554 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-02-02 01:17:54.662565 | orchestrator | Monday 02 February 2026 01:14:22 +0000 (0:00:16.337) 0:01:34.038 *******
2026-02-02 01:17:54.662575 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:17:54.662586 | orchestrator |
2026-02-02 01:17:54.662597 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-02-02 01:17:54.662608 | orchestrator | Monday 02 February 2026 01:14:27 +0000 (0:00:04.769) 0:01:38.808 *******
2026-02-02 01:17:54.662618 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:17:54.662630 | orchestrator |
2026-02-02 01:17:54.662641 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-02-02 01:17:54.662651 | orchestrator | Monday 02 February 2026 01:14:33 +0000 (0:00:05.905) 0:01:44.714 *******
2026-02-02 01:17:54.662662 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:17:54.662673 | orchestrator |
2026-02-02 01:17:54.662684 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-02-02 01:17:54.662694 | orchestrator | Monday 02 February 2026 01:14:34 +0000 (0:00:00.441) 0:01:45.155 *******
2026-02-02 01:17:54.662705 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:17:54.662716 | orchestrator |
2026-02-02 01:17:54.662735 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-02 01:17:54.662746 | orchestrator | Monday 02 February 2026 01:14:38 +0000 (0:00:04.855) 0:01:50.014 *******
2026-02-02 01:17:54.662757 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:17:54.662776 | orchestrator |
2026-02-02 01:17:54.662787 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-02-02 01:17:54.662798 | orchestrator | Monday 02 February 2026 01:14:40 +0000 (0:00:01.745) 0:01:51.760 *******
2026-02-02 01:17:54.662809 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:17:54.662820 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:17:54.662831 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:17:54.662842 | orchestrator |
2026-02-02 01:17:54.662853 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-02-02 01:17:54.662864 | orchestrator | Monday 02 February 2026 01:14:46 +0000 (0:00:05.685) 0:01:57.446 *******
2026-02-02 01:17:54.662874 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:17:54.662885 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:17:54.662896 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:17:54.662907 | orchestrator |
2026-02-02 01:17:54.662918 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-02-02 01:17:54.662929 | orchestrator | Monday 02 February 2026 01:14:51 +0000 (0:00:04.916) 0:02:02.363 *******
2026-02-02 01:17:54.662939 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:17:54.662950 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:17:54.662961 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:17:54.662972 | orchestrator |
2026-02-02 01:17:54.662988 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-02-02 01:17:54.662999 | orchestrator | Monday 02 February 2026 01:14:52 +0000 (0:00:02.698) 0:02:03.442 *******
2026-02-02 01:17:54.663010 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:17:54.663021 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:17:54.663033 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:17:54.663044 | orchestrator |
2026-02-02 01:17:54.663054 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-02-02 01:17:54.663065 | orchestrator | Monday 02 February 2026 01:14:55 +0000 (0:00:02.698) 0:02:06.140 *******
2026-02-02 01:17:54.663076 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:17:54.663087 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:17:54.663098 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:17:54.663109 | orchestrator |
2026-02-02 01:17:54.663120 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-02-02 01:17:54.663130 | orchestrator | Monday 02 February 2026 01:14:56 +0000 (0:00:01.593) 0:02:07.734 *******
2026-02-02 01:17:54.663141 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:17:54.663152 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:17:54.663163 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:17:54.663173 | orchestrator |
2026-02-02 01:17:54.663184 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-02-02 01:17:54.663195 | orchestrator | Monday 02 February 2026 01:14:57 +0000 (0:00:01.063) 0:02:08.797 *******
2026-02-02 01:17:54.663206 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:17:54.663217 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:17:54.663227 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:17:54.663238 | orchestrator |
2026-02-02 01:17:54.663250 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-02-02 01:17:54.663290 | orchestrator | Monday 02 February 2026 01:14:59 +0000 (0:00:02.173) 0:02:10.970 *******
2026-02-02 01:17:54.663301 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:17:54.663311 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:17:54.663322 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:17:54.663333 | orchestrator |
2026-02-02 01:17:54.663344 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-02-02 01:17:54.663355 | orchestrator | Monday 02 February 2026 01:15:01 +0000 (0:00:02.038) 0:02:13.009 *******
2026-02-02 01:17:54.663365 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:17:54.663376 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:17:54.663387 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:17:54.663398 | orchestrator |
2026-02-02 01:17:54.663415 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-02-02 01:17:54.663426 | orchestrator | Monday 02 February 2026 01:15:02 +0000 (0:00:00.845) 0:02:13.854 *******
2026-02-02 01:17:54.663437 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:17:54.663448 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:17:54.663459 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:17:54.663470 | orchestrator |
2026-02-02 01:17:54.663481 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-02 01:17:54.663518 | orchestrator | Monday 02 February 2026 01:15:06 +0000 (0:00:03.944) 0:02:17.799 *******
2026-02-02 01:17:54.663537 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:17:54.663556 | orchestrator |
2026-02-02 01:17:54.663575 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-02-02 01:17:54.663594 | orchestrator | Monday 02 February 2026 01:15:07 +0000 (0:00:00.822) 0:02:18.621 *******
2026-02-02 01:17:54.663610 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:17:54.663621 | orchestrator |
2026-02-02 01:17:54.663632 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-02 01:17:54.663643 | orchestrator | Monday 02 February 2026 01:15:11 +0000 (0:00:03.821) 0:02:22.442 *******
2026-02-02 01:17:54.663654 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:17:54.663664 | orchestrator |
2026-02-02 01:17:54.663675 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-02-02 01:17:54.663686 | orchestrator | Monday 02 February 2026 01:15:14 +0000 (0:00:03.473) 0:02:25.916 *******
2026-02-02 01:17:54.663697 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-02 01:17:54.663707 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-02 01:17:54.663718 | orchestrator |
2026-02-02 01:17:54.663729 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-02-02 01:17:54.663740 | orchestrator | Monday 02 February 2026 01:15:22 +0000 (0:00:07.473) 0:02:33.390 *******
2026-02-02 01:17:54.663751 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:17:54.663762 | orchestrator |
2026-02-02 01:17:54.663791 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-02-02 01:17:54.663802 | orchestrator | Monday 02 February 2026 01:15:26 +0000 (0:00:03.885) 0:02:37.276 *******
2026-02-02 01:17:54.663814 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:17:54.663825 | orchestrator | ok: [testbed-node-1]
2026-02-02 01:17:54.663836 | orchestrator | ok: [testbed-node-2]
2026-02-02 01:17:54.663847 | orchestrator |
2026-02-02 01:17:54.663858 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-02-02 01:17:54.663868 | orchestrator | Monday 02 February 2026 01:15:26 +0000 (0:00:00.366) 0:02:37.642 *******
2026-02-02 01:17:54.663890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 01:17:54.663908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 01:17:54.663929 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 01:17:54.663942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 01:17:54.663963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 01:17:54.663976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 01:17:54.663992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.664006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.664024 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.664036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.664049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.664068 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.664080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:17:54.664096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:17:54.664115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:17:54.664127 | orchestrator | 2026-02-02 01:17:54.664139 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-02 01:17:54.664150 | orchestrator | Monday 02 February 2026 01:15:29 +0000 (0:00:02.613) 0:02:40.255 ******* 2026-02-02 01:17:54.664161 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:17:54.664172 | orchestrator | 2026-02-02 01:17:54.664183 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-02 01:17:54.664194 | orchestrator | Monday 02 February 2026 01:15:29 +0000 (0:00:00.140) 0:02:40.396 ******* 2026-02-02 01:17:54.664205 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:17:54.664216 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:17:54.664227 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:17:54.664237 | orchestrator | 2026-02-02 01:17:54.664248 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-02 01:17:54.664260 | orchestrator | Monday 02 February 2026 01:15:29 +0000 (0:00:00.545) 0:02:40.941 ******* 2026-02-02 01:17:54.664271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 01:17:54.664290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 01:17:54.664302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 01:17:54.664333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 01:17:54.664345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:17:54.664356 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:17:54.664368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 01:17:54.664379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 01:17:54.664398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 01:17:54.664411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 01:17:54.664434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:17:54.664445 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:17:54.664457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 01:17:54.664468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 01:17:54.664480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 01:17:54.664521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 01:17:54.664535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:17:54.664553 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:17:54.664564 | orchestrator | 2026-02-02 01:17:54.664576 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-02 01:17:54.664587 | orchestrator | Monday 02 February 2026 01:15:30 +0000 (0:00:00.759) 0:02:41.701 ******* 2026-02-02 01:17:54.664598 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:17:54.664609 | orchestrator | 2026-02-02 01:17:54.664620 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-02 01:17:54.664636 | orchestrator | Monday 02 February 2026 01:15:31 +0000 (0:00:00.595) 0:02:42.297 ******* 2026-02-02 01:17:54.664647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 01:17:54.664659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 01:17:54.664671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 01:17:54.664691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 01:17:54.664709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 01:17:54.664726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 01:17:54.664738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.664749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.664761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.664772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.664796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.664813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.664824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:17:54.664835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:17:54.664847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:17:54.664858 | orchestrator |
2026-02-02 01:17:54.664869 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-02-02 01:17:54.664880 | orchestrator | Monday 02 February 2026 01:15:36 +0000 (0:00:05.111) 0:02:47.408 *******
2026-02-02 01:17:54.664892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 01:17:54.664916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 01:17:54.664933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.664944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.664956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:17:54.664967 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:17:54.664978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 01:17:54.664990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 01:17:54.665014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:17:54.665053 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:17:54.665065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 01:17:54.665076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 01:17:54.665161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:17:54.665249 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:17:54.665261 | orchestrator |
2026-02-02 01:17:54.665273 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-02-02 01:17:54.665293 | orchestrator | Monday 02 February 2026 01:15:37 +0000 (0:00:00.811) 0:02:48.220 *******
2026-02-02 01:17:54.665305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 01:17:54.665317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 01:17:54.665329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:17:54.665378 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:17:54.665395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 01:17:54.665407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 01:17:54.665418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:17:54.665577 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:17:54.665600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 01:17:54.665619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 01:17:54.665630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:17:54.665674 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:17:54.665685 | orchestrator |
2026-02-02 01:17:54.665696 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-02-02 01:17:54.665707 | orchestrator | Monday 02 February 2026 01:15:37 +0000 (0:00:00.854) 0:02:49.075 *******
2026-02-02 01:17:54.665726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 01:17:54.665738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 01:17:54.665756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 01:17:54.665768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 01:17:54.665786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 01:17:54.665798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 01:17:54.665815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 01:17:54.665896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:17:54.665929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:17:54.665942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 01:17:54.665964 | orchestrator |
2026-02-02 01:17:54.665976 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-02-02 01:17:54.665992 | orchestrator | Monday 02 February 2026 01:15:43 +0000 (0:00:05.370) 0:02:54.445 *******
2026-02-02 01:17:54.666004 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-02 01:17:54.666060 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-02 01:17:54.666075 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-02 01:17:54.666086 | orchestrator |
2026-02-02 01:17:54.666096 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-02-02 01:17:54.666106 | orchestrator | Monday 02 February 2026 01:15:45 +0000 (0:00:02.206) 0:02:56.651 *******
2026-02-02 01:17:54.666116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 01:17:54.666133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 01:17:54.666313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 01:17:54.666386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 01:17:54.666414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 01:17:54.666425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 01:17:54.666456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.666467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.666476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.666526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.666538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.666552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.666576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:17:54.666586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:17:54.666595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:17:54.666605 | orchestrator | 2026-02-02 01:17:54.666615 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-02 01:17:54.666625 | orchestrator | Monday 02 February 2026 01:16:05 +0000 (0:00:20.013) 0:03:16.665 ******* 2026-02-02 01:17:54.666635 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:17:54.666644 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:17:54.666653 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:17:54.666662 | orchestrator | 2026-02-02 01:17:54.666672 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-02 01:17:54.666681 | orchestrator | Monday 02 February 2026 01:16:07 +0000 (0:00:01.558) 0:03:18.224 ******* 2026-02-02 01:17:54.666690 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-02 01:17:54.666699 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-02 01:17:54.666708 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-02 01:17:54.666717 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-02 01:17:54.666732 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-02 01:17:54.666741 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-02 01:17:54.666750 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-02 01:17:54.666759 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-02 01:17:54.666768 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-02 01:17:54.666777 | orchestrator | 
changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-02 01:17:54.666785 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-02 01:17:54.666794 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-02 01:17:54.666803 | orchestrator | 2026-02-02 01:17:54.666812 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-02 01:17:54.666821 | orchestrator | Monday 02 February 2026 01:16:12 +0000 (0:00:04.929) 0:03:23.154 ******* 2026-02-02 01:17:54.666836 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-02 01:17:54.666845 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-02 01:17:54.666854 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-02 01:17:54.666862 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-02 01:17:54.666871 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-02 01:17:54.666884 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-02 01:17:54.666894 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-02 01:17:54.666903 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-02 01:17:54.666912 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-02 01:17:54.666921 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-02 01:17:54.666930 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-02 01:17:54.666939 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-02 01:17:54.666947 | orchestrator | 2026-02-02 01:17:54.666956 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-02 01:17:54.666965 | orchestrator | Monday 02 February 2026 01:16:17 +0000 
(0:00:05.630) 0:03:28.784 ******* 2026-02-02 01:17:54.666974 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-02 01:17:54.666983 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-02 01:17:54.666991 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-02 01:17:54.667000 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-02 01:17:54.667009 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-02 01:17:54.667018 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-02 01:17:54.667026 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-02 01:17:54.667035 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-02 01:17:54.667044 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-02 01:17:54.667053 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-02 01:17:54.667061 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-02 01:17:54.667070 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-02 01:17:54.667079 | orchestrator | 2026-02-02 01:17:54.667087 | orchestrator | TASK [service-check-containers : octavia | Check containers] ******************* 2026-02-02 01:17:54.667096 | orchestrator | Monday 02 February 2026 01:16:22 +0000 (0:00:05.214) 0:03:33.999 ******* 2026-02-02 01:17:54.667106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 01:17:54.667123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 01:17:54.667143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 01:17:54.667153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 01:17:54.667163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 01:17:54.667172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 01:17:54.667182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.667196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.667211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.667235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.667245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.667254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 01:17:54.667264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:17:54.667273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:17:54.667294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 01:17:54.667303 | orchestrator | 2026-02-02 01:17:54.667313 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-02-02 01:17:54.667322 | orchestrator | Monday 02 February 2026 01:16:27 +0000 (0:00:04.229) 0:03:38.228 ******* 2026-02-02 01:17:54.667332 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 01:17:54.667341 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:17:54.667350 | orchestrator | } 2026-02-02 01:17:54.667359 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 01:17:54.667368 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:17:54.667377 | orchestrator | } 2026-02-02 01:17:54.667386 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 01:17:54.667395 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:17:54.667404 | orchestrator | } 2026-02-02 01:17:54.667413 | orchestrator | 2026-02-02 01:17:54.667422 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 01:17:54.667431 | orchestrator | Monday 02 February 2026 01:16:27 +0000 (0:00:00.370) 0:03:38.599 ******* 2026-02-02 01:17:54.667445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 01:17:54.667455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 01:17:54.667465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 01:17:54.667480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 01:17:54.667543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:17:54.667555 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:17:54.667570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 01:17:54.667580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 01:17:54.667589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 01:17:54.667599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 01:17:54.667614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:17:54.667623 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:17:54.667639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 01:17:54.667649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 01:17:54.667662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 01:17:54.667672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 01:17:54.667682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 01:17:54.667696 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:17:54.667705 | orchestrator | 2026-02-02 01:17:54.667714 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-02 01:17:54.667723 | orchestrator | Monday 02 February 2026 01:16:28 +0000 (0:00:01.463) 0:03:40.062 ******* 2026-02-02 01:17:54.667732 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:17:54.667741 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:17:54.667750 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:17:54.667759 | orchestrator | 2026-02-02 01:17:54.667768 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-02 01:17:54.667777 | orchestrator | Monday 02 February 2026 01:16:29 +0000 (0:00:00.469) 0:03:40.532 ******* 2026-02-02 01:17:54.667786 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:17:54.667795 | orchestrator | 2026-02-02 01:17:54.667804 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-02 01:17:54.667813 | orchestrator | Monday 02 February 2026 01:16:31 +0000 (0:00:02.414) 0:03:42.946 ******* 2026-02-02 01:17:54.667822 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:17:54.667832 | orchestrator | 2026-02-02 01:17:54.667841 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-02 01:17:54.667850 | orchestrator | Monday 02 February 2026 01:16:34 +0000 (0:00:02.405) 0:03:45.352 ******* 2026-02-02 01:17:54.667859 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:17:54.667868 | orchestrator | 2026-02-02 01:17:54.667877 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-02 01:17:54.667886 | orchestrator | Monday 02 February 2026 01:16:36 +0000 (0:00:02.493) 0:03:47.845 ******* 2026-02-02 
01:17:54.667895 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:17:54.667904 | orchestrator | 2026-02-02 01:17:54.667913 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-02-02 01:17:54.667927 | orchestrator | Monday 02 February 2026 01:16:39 +0000 (0:00:02.529) 0:03:50.375 ******* 2026-02-02 01:17:54.667936 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:17:54.667945 | orchestrator | 2026-02-02 01:17:54.667954 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-02 01:17:54.667963 | orchestrator | Monday 02 February 2026 01:17:04 +0000 (0:00:24.840) 0:04:15.216 ******* 2026-02-02 01:17:54.667972 | orchestrator | 2026-02-02 01:17:54.667981 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-02 01:17:54.667990 | orchestrator | Monday 02 February 2026 01:17:04 +0000 (0:00:00.074) 0:04:15.290 ******* 2026-02-02 01:17:54.667999 | orchestrator | 2026-02-02 01:17:54.668008 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-02 01:17:54.668017 | orchestrator | Monday 02 February 2026 01:17:04 +0000 (0:00:00.073) 0:04:15.364 ******* 2026-02-02 01:17:54.668025 | orchestrator | 2026-02-02 01:17:54.668034 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-02 01:17:54.668043 | orchestrator | Monday 02 February 2026 01:17:04 +0000 (0:00:00.319) 0:04:15.684 ******* 2026-02-02 01:17:54.668052 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:17:54.668061 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:17:54.668070 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:17:54.668079 | orchestrator | 2026-02-02 01:17:54.668088 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-02 01:17:54.668097 | orchestrator | Monday 02 
February 2026 01:17:18 +0000 (0:00:14.304) 0:04:29.988 ******* 2026-02-02 01:17:54.668106 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:17:54.668115 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:17:54.668129 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:17:54.668138 | orchestrator | 2026-02-02 01:17:54.668153 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-02 01:17:54.668162 | orchestrator | Monday 02 February 2026 01:17:31 +0000 (0:00:12.725) 0:04:42.713 ******* 2026-02-02 01:17:54.668172 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:17:54.668180 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:17:54.668190 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:17:54.668199 | orchestrator | 2026-02-02 01:17:54.668207 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-02 01:17:54.668217 | orchestrator | Monday 02 February 2026 01:17:36 +0000 (0:00:05.202) 0:04:47.916 ******* 2026-02-02 01:17:54.668225 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:17:54.668234 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:17:54.668243 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:17:54.668252 | orchestrator | 2026-02-02 01:17:54.668261 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-02 01:17:54.668271 | orchestrator | Monday 02 February 2026 01:17:42 +0000 (0:00:05.681) 0:04:53.598 ******* 2026-02-02 01:17:54.668280 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:17:54.668289 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:17:54.668298 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:17:54.668307 | orchestrator | 2026-02-02 01:17:54.668316 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:17:54.668325 | orchestrator | testbed-node-0 : ok=58  
changed=39  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-02 01:17:54.668335 | orchestrator | testbed-node-1 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 01:17:54.668345 | orchestrator | testbed-node-2 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 01:17:54.668354 | orchestrator | 2026-02-02 01:17:54.668363 | orchestrator | 2026-02-02 01:17:54.668372 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:17:54.668381 | orchestrator | Monday 02 February 2026 01:17:52 +0000 (0:00:10.218) 0:05:03.816 ******* 2026-02-02 01:17:54.668390 | orchestrator | =============================================================================== 2026-02-02 01:17:54.668399 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 24.84s 2026-02-02 01:17:54.668408 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 20.01s 2026-02-02 01:17:54.668417 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.37s 2026-02-02 01:17:54.668426 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.34s 2026-02-02 01:17:54.668435 | orchestrator | octavia : Restart octavia-api container -------------------------------- 14.30s 2026-02-02 01:17:54.668444 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 12.73s 2026-02-02 01:17:54.668453 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.66s 2026-02-02 01:17:54.668462 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.22s 2026-02-02 01:17:54.668471 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.21s 2026-02-02 01:17:54.668480 | orchestrator | service-ks-register : octavia | Granting/revoking 
user roles ------------ 7.77s 2026-02-02 01:17:54.668489 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.47s 2026-02-02 01:17:54.668516 | orchestrator | service-ks-register : octavia | Creating/deleting endpoints ------------- 6.83s 2026-02-02 01:17:54.668526 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.91s 2026-02-02 01:17:54.668535 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.72s 2026-02-02 01:17:54.668544 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.69s 2026-02-02 01:17:54.668553 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.68s 2026-02-02 01:17:54.668567 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.63s 2026-02-02 01:17:54.668581 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.37s 2026-02-02 01:17:54.668591 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.21s 2026-02-02 01:17:54.668600 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.20s 2026-02-02 01:17:54.668609 | orchestrator | 2026-02-02 01:17:54 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:17:57.700986 | orchestrator | 2026-02-02 01:17:57 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:17:57.701090 | orchestrator | 2026-02-02 01:17:57 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:18:00.741368 | orchestrator | 2026-02-02 01:18:00 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state STARTED 2026-02-02 01:18:00.741446 | orchestrator | 2026-02-02 01:18:00 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:18:03.797438 | orchestrator | 2026-02-02 01:18:03 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in 
state STARTED 2026-02-02 01:18:03.797525 | orchestrator | 2026-02-02 01:18:03 | INFO  | Wait 1 second(s) until the next check 2026-02-02 01:18:06.841876 | orchestrator | 2026-02-02 01:18:06 | INFO  | Task f69b75bb-b2b0-41d0-bf9c-9912092a445c is in state SUCCESS 2026-02-02 01:18:06.844910 | orchestrator | 2026-02-02 01:18:06.844988 | orchestrator | 2026-02-02 01:18:06.845003 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 01:18:06.845016 | orchestrator | 2026-02-02 01:18:06.845028 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-02-02 01:18:06.845040 | orchestrator | Monday 02 February 2026 01:08:11 +0000 (0:00:00.312) 0:00:00.312 ******* 2026-02-02 01:18:06.845052 | orchestrator | changed: [testbed-manager] 2026-02-02 01:18:06.845065 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:18:06.845076 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:18:06.845086 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:18:06.845096 | orchestrator | changed: [testbed-node-3] 2026-02-02 01:18:06.845106 | orchestrator | changed: [testbed-node-4] 2026-02-02 01:18:06.845116 | orchestrator | changed: [testbed-node-5] 2026-02-02 01:18:06.845125 | orchestrator | 2026-02-02 01:18:06.845135 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 01:18:06.845145 | orchestrator | Monday 02 February 2026 01:08:12 +0000 (0:00:01.070) 0:00:01.382 ******* 2026-02-02 01:18:06.845155 | orchestrator | changed: [testbed-manager] 2026-02-02 01:18:06.845165 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:18:06.845175 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:18:06.845185 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:18:06.845194 | orchestrator | changed: [testbed-node-3] 2026-02-02 01:18:06.845204 | orchestrator | changed: [testbed-node-4] 2026-02-02 01:18:06.845214 | 
orchestrator | changed: [testbed-node-5] 2026-02-02 01:18:06.845224 | orchestrator | 2026-02-02 01:18:06.845234 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 01:18:06.845244 | orchestrator | Monday 02 February 2026 01:08:12 +0000 (0:00:00.799) 0:00:02.182 ******* 2026-02-02 01:18:06.845254 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-02-02 01:18:06.845264 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-02-02 01:18:06.845274 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-02-02 01:18:06.845284 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-02-02 01:18:06.845293 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-02-02 01:18:06.845303 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-02-02 01:18:06.845313 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-02-02 01:18:06.845344 | orchestrator | 2026-02-02 01:18:06.845354 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-02-02 01:18:06.845367 | orchestrator | 2026-02-02 01:18:06.845384 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-02 01:18:06.845400 | orchestrator | Monday 02 February 2026 01:08:14 +0000 (0:00:01.346) 0:00:03.529 ******* 2026-02-02 01:18:06.845417 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:18:06.845433 | orchestrator | 2026-02-02 01:18:06.845448 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-02-02 01:18:06.845463 | orchestrator | Monday 02 February 2026 01:08:15 +0000 (0:00:01.300) 0:00:04.829 ******* 2026-02-02 01:18:06.845480 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-02-02 01:18:06.845497 | orchestrator | 
changed: [testbed-node-0] => (item=nova_api) 2026-02-02 01:18:06.845515 | orchestrator | 2026-02-02 01:18:06.845534 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-02-02 01:18:06.845577 | orchestrator | Monday 02 February 2026 01:08:19 +0000 (0:00:04.226) 0:00:09.055 ******* 2026-02-02 01:18:06.845591 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 01:18:06.845603 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 01:18:06.845614 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:18:06.845625 | orchestrator | 2026-02-02 01:18:06.845636 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-02 01:18:06.845647 | orchestrator | Monday 02 February 2026 01:08:23 +0000 (0:00:03.785) 0:00:12.841 ******* 2026-02-02 01:18:06.845659 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:18:06.845670 | orchestrator | 2026-02-02 01:18:06.845681 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-02-02 01:18:06.845693 | orchestrator | Monday 02 February 2026 01:08:24 +0000 (0:00:00.984) 0:00:13.826 ******* 2026-02-02 01:18:06.845705 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:18:06.845715 | orchestrator | 2026-02-02 01:18:06.845725 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-02-02 01:18:06.845735 | orchestrator | Monday 02 February 2026 01:08:26 +0000 (0:00:01.745) 0:00:15.572 ******* 2026-02-02 01:18:06.845744 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:18:06.845754 | orchestrator | 2026-02-02 01:18:06.845764 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-02 01:18:06.845773 | orchestrator | Monday 02 February 2026 01:08:31 +0000 (0:00:05.568) 0:00:21.140 ******* 2026-02-02 01:18:06.845783 | orchestrator | skipping: [testbed-node-0] 
2026-02-02 01:18:06.845793 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.845802 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.845812 | orchestrator | 2026-02-02 01:18:06.845822 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-02-02 01:18:06.845832 | orchestrator | Monday 02 February 2026 01:08:33 +0000 (0:00:01.751) 0:00:22.892 ******* 2026-02-02 01:18:06.845841 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:18:06.845851 | orchestrator | 2026-02-02 01:18:06.845861 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-02-02 01:18:06.845871 | orchestrator | Monday 02 February 2026 01:09:09 +0000 (0:00:35.506) 0:00:58.399 ******* 2026-02-02 01:18:06.845880 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:18:06.845896 | orchestrator | 2026-02-02 01:18:06.845913 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-02 01:18:06.845930 | orchestrator | Monday 02 February 2026 01:09:23 +0000 (0:00:14.223) 0:01:12.622 ******* 2026-02-02 01:18:06.845946 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:18:06.845961 | orchestrator | 2026-02-02 01:18:06.845977 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-02 01:18:06.845993 | orchestrator | Monday 02 February 2026 01:09:36 +0000 (0:00:13.387) 0:01:26.009 ******* 2026-02-02 01:18:06.846097 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:18:06.846124 | orchestrator | 2026-02-02 01:18:06.846139 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-02-02 01:18:06.846157 | orchestrator | Monday 02 February 2026 01:09:38 +0000 (0:00:01.345) 0:01:27.355 ******* 2026-02-02 01:18:06.846167 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.846177 | orchestrator | 2026-02-02 01:18:06.846187 | orchestrator | 
TASK [nova : include_tasks] **************************************************** 2026-02-02 01:18:06.846197 | orchestrator | Monday 02 February 2026 01:09:38 +0000 (0:00:00.649) 0:01:28.005 ******* 2026-02-02 01:18:06.846207 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:18:06.846217 | orchestrator | 2026-02-02 01:18:06.846227 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-02-02 01:18:06.846236 | orchestrator | Monday 02 February 2026 01:09:39 +0000 (0:00:00.541) 0:01:28.546 ******* 2026-02-02 01:18:06.846246 | orchestrator | ok: [testbed-node-0] 2026-02-02 01:18:06.846256 | orchestrator | 2026-02-02 01:18:06.846266 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-02 01:18:06.846276 | orchestrator | Monday 02 February 2026 01:09:58 +0000 (0:00:19.699) 0:01:48.246 ******* 2026-02-02 01:18:06.846286 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.846295 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.846305 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.846315 | orchestrator | 2026-02-02 01:18:06.846324 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-02-02 01:18:06.846334 | orchestrator | 2026-02-02 01:18:06.846344 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-02 01:18:06.846354 | orchestrator | Monday 02 February 2026 01:09:59 +0000 (0:00:00.420) 0:01:48.667 ******* 2026-02-02 01:18:06.846363 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:18:06.846373 | orchestrator | 2026-02-02 01:18:06.846383 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-02-02 01:18:06.846393 | orchestrator | Monday 02 
February 2026 01:09:59 +0000 (0:00:00.578) 0:01:49.245 ******* 2026-02-02 01:18:06.846403 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.846412 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.846422 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:18:06.846432 | orchestrator | 2026-02-02 01:18:06.846442 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-02-02 01:18:06.846451 | orchestrator | Monday 02 February 2026 01:10:02 +0000 (0:00:02.192) 0:01:51.438 ******* 2026-02-02 01:18:06.846461 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.846471 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.846480 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:18:06.846490 | orchestrator | 2026-02-02 01:18:06.846500 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-02 01:18:06.846513 | orchestrator | Monday 02 February 2026 01:10:04 +0000 (0:00:02.232) 0:01:53.671 ******* 2026-02-02 01:18:06.846530 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.846546 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.846600 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.846616 | orchestrator | 2026-02-02 01:18:06.846631 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-02 01:18:06.846648 | orchestrator | Monday 02 February 2026 01:10:04 +0000 (0:00:00.375) 0:01:54.046 ******* 2026-02-02 01:18:06.846664 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-02 01:18:06.846681 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.846696 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-02 01:18:06.846712 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.846729 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-02 01:18:06.846746 | orchestrator | 
ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-02-02 01:18:06.846761 | orchestrator | 2026-02-02 01:18:06.846775 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-02 01:18:06.846804 | orchestrator | Monday 02 February 2026 01:10:17 +0000 (0:00:12.780) 0:02:06.826 ******* 2026-02-02 01:18:06.846821 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.846837 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.846854 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.846870 | orchestrator | 2026-02-02 01:18:06.846886 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-02 01:18:06.846903 | orchestrator | Monday 02 February 2026 01:10:18 +0000 (0:00:00.625) 0:02:07.451 ******* 2026-02-02 01:18:06.846919 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-02 01:18:06.846935 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.846946 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-02 01:18:06.846955 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.846965 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-02 01:18:06.846975 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.846984 | orchestrator | 2026-02-02 01:18:06.846994 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-02 01:18:06.847004 | orchestrator | Monday 02 February 2026 01:10:19 +0000 (0:00:01.219) 0:02:08.670 ******* 2026-02-02 01:18:06.847014 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.847024 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:18:06.847034 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.847043 | orchestrator | 2026-02-02 01:18:06.847053 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-02-02 
01:18:06.847063 | orchestrator | Monday 02 February 2026 01:10:20 +0000 (0:00:01.441) 0:02:10.111 *******
2026-02-02 01:18:06.847072 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.847082 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.847092 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:18:06.847102 | orchestrator |
2026-02-02 01:18:06.847112 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-02 01:18:06.847122 | orchestrator | Monday 02 February 2026 01:10:21 +0000 (0:00:01.003) 0:02:11.115 *******
2026-02-02 01:18:06.847132 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.847142 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.847161 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:18:06.847172 | orchestrator |
2026-02-02 01:18:06.847186 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-02 01:18:06.847202 | orchestrator | Monday 02 February 2026 01:10:23 +0000 (0:00:02.116) 0:02:13.232 *******
2026-02-02 01:18:06.847218 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.847234 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.847250 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:18:06.847267 | orchestrator |
2026-02-02 01:18:06.847283 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-02 01:18:06.847301 | orchestrator | Monday 02 February 2026 01:10:46 +0000 (0:00:22.830) 0:02:36.062 *******
2026-02-02 01:18:06.847317 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.847334 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.847345 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:18:06.847355 | orchestrator |
2026-02-02 01:18:06.847364 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-02 01:18:06.847374 | orchestrator | Monday 02 February 2026 01:10:59 +0000 (0:00:12.867) 0:02:48.930 *******
2026-02-02 01:18:06.847384 | orchestrator | ok: [testbed-node-0]
2026-02-02 01:18:06.847394 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.847404 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.847413 | orchestrator |
2026-02-02 01:18:06.847423 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-02-02 01:18:06.847434 | orchestrator | Monday 02 February 2026 01:11:00 +0000 (0:00:01.009) 0:02:49.939 *******
2026-02-02 01:18:06.847443 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.847453 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.847471 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:18:06.847481 | orchestrator |
2026-02-02 01:18:06.847491 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-02-02 01:18:06.847501 | orchestrator | Monday 02 February 2026 01:11:13 +0000 (0:00:13.001) 0:03:02.941 *******
2026-02-02 01:18:06.847535 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.847544 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.847703 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.847723 | orchestrator |
2026-02-02 01:18:06.847734 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-02 01:18:06.847780 | orchestrator | Monday 02 February 2026 01:11:14 +0000 (0:00:01.075) 0:03:04.017 *******
2026-02-02 01:18:06.847798 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.847815 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.847825 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.847834 | orchestrator |
2026-02-02 01:18:06.847845 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-02-02 01:18:06.847854 |
orchestrator |
2026-02-02 01:18:06.847889 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-02 01:18:06.847899 | orchestrator | Monday 02 February 2026 01:11:15 +0000 (0:00:00.579) 0:03:04.596 *******
2026-02-02 01:18:06.847909 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:18:06.847920 | orchestrator |
2026-02-02 01:18:06.847930 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] *****************
2026-02-02 01:18:06.847940 | orchestrator | Monday 02 February 2026 01:11:15 +0000 (0:00:00.585) 0:03:05.182 *******
2026-02-02 01:18:06.847950 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-02-02 01:18:06.847964 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-02-02 01:18:06.847978 | orchestrator |
2026-02-02 01:18:06.847988 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] ****************
2026-02-02 01:18:06.847998 | orchestrator | Monday 02 February 2026 01:11:19 +0000 (0:00:03.460) 0:03:08.642 *******
2026-02-02 01:18:06.848008 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-02-02 01:18:06.848019 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-02-02 01:18:06.848029 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-02-02 01:18:06.848039 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-02-02 01:18:06.848049 | orchestrator |
2026-02-02 01:18:06.848059 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-02-02 01:18:06.848069 | orchestrator | Monday 02 February 2026 01:11:26 +0000 (0:00:06.984) 0:03:15.627 *******
2026-02-02 01:18:06.848079 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-02 01:18:06.848088 | orchestrator |
2026-02-02 01:18:06.848098 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-02-02 01:18:06.848108 | orchestrator | Monday 02 February 2026 01:11:29 +0000 (0:00:03.459) 0:03:19.086 *******
2026-02-02 01:18:06.848117 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-02-02 01:18:06.848127 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-02 01:18:06.848137 | orchestrator |
2026-02-02 01:18:06.848146 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-02-02 01:18:06.848156 | orchestrator | Monday 02 February 2026 01:11:34 +0000 (0:00:04.258) 0:03:23.344 *******
2026-02-02 01:18:06.848166 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-02 01:18:06.848176 | orchestrator |
2026-02-02 01:18:06.848186 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] ***************
2026-02-02 01:18:06.848195 | orchestrator | Monday 02 February 2026 01:11:37 +0000 (0:00:03.533) 0:03:26.878 *******
2026-02-02 01:18:06.848215 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-02-02 01:18:06.848225 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-02-02 01:18:06.848234 | orchestrator |
2026-02-02 01:18:06.848249 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-02 01:18:06.848269 | orchestrator | Monday 02 February 2026 01:11:45 +0000 (0:00:07.754) 0:03:34.632 *******
2026-02-02 01:18:06.848286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image':
'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.848302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.848315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.848342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.848362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.848374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.848387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.848399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.848415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 01:18:06.848425 | orchestrator |
2026-02-02 01:18:06.848445 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-02-02 01:18:06.848456 | orchestrator | Monday 02 February 2026 01:11:46 +0000 (0:00:01.630) 0:03:36.263 *******
2026-02-02 01:18:06.848466 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.848476 | orchestrator |
2026-02-02 01:18:06.848486 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-02-02 01:18:06.848496 | orchestrator | Monday 02 February 2026 01:11:47 +0000 (0:00:00.161) 0:03:36.424 *******
2026-02-02 01:18:06.848506 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.848516 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.848526 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.848535 | orchestrator |
2026-02-02 01:18:06.848545 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-02-02 01:18:06.848583 | orchestrator | Monday 02 February 2026 01:11:47 +0000 (0:00:00.584) 0:03:37.009 *******
2026-02-02 01:18:06.848593 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-02 01:18:06.848603 | orchestrator |
2026-02-02 01:18:06.848617 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-02-02 01:18:06.848633 | orchestrator | Monday 02 February 2026 01:11:48 +0000 (0:00:00.830) 0:03:37.839 *******
2026-02-02 01:18:06.848649 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.848666 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.848682 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.848698 | orchestrator |
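The service-ks-register tasks logged earlier (Creating/deleting services, endpoints, users, roles) perform standard Keystone registration for Nova. A minimal sketch of the equivalent `openstack` CLI calls follows; note that kolla-ansible actually drives this through the openstack.cloud Ansible modules, not the CLI, that the region name `RegionOne` and the dry-run `run` wrapper are illustrative assumptions, and that the endpoint URLs and the `service` project are taken from the log above.

```shell
#!/bin/sh
# Sketch of the Keystone registration performed by the service-ks-register
# tasks. The run() wrapper only prints each command so the sketch is runnable
# without a cloud; drop the echo to execute for real.
run() { echo "+ $*"; }

# Compute service plus internal and public v2.1 endpoints (URLs from the log)
run openstack service create --name nova --description "OpenStack Compute" compute
run openstack endpoint create --region RegionOne compute internal https://api-int.testbed.osism.xyz:8774/v2.1
run openstack endpoint create --region RegionOne compute public https://api.testbed.osism.xyz:8774/v2.1

# Service user in the "service" project, granted the admin and service roles
run openstack user create --project service --password-prompt nova
run openstack role add --project service --user nova admin
run openstack role add --project service --user nova service
```

In the real deployment these operations are idempotent: rerunning the role reports `ok` instead of `changed` once the service, endpoints, and grants already exist.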
2026-02-02 01:18:06.848711 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-02 01:18:06.848721 | orchestrator | Monday 02 February 2026 01:11:48 +0000 (0:00:00.298) 0:03:38.138 ******* 2026-02-02 01:18:06.848731 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 01:18:06.848741 | orchestrator | 2026-02-02 01:18:06.848751 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-02 01:18:06.848760 | orchestrator | Monday 02 February 2026 01:11:49 +0000 (0:00:00.595) 0:03:38.733 ******* 2026-02-02 01:18:06.848772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.848784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.848838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 
2026-02-02 01:18:06.848851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.848863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.848881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.848903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.848914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.848925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.848935 | orchestrator | 2026-02-02 01:18:06.848945 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-02 01:18:06.848955 | orchestrator | Monday 02 February 2026 01:11:53 +0000 (0:00:03.663) 0:03:42.397 ******* 2026-02-02 01:18:06.848966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 
'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.848983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.848995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.849011 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.849029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': 
{'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.849069 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.849080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.849125 | orchestrator | skipping: [testbed-node-2] 2026-02-02 
01:18:06.849135 | orchestrator | 2026-02-02 01:18:06.849145 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-02 01:18:06.849155 | orchestrator | Monday 02 February 2026 01:11:53 +0000 (0:00:00.782) 0:03:43.179 ******* 2026-02-02 01:18:06.849166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.849217 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.849227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.849266 | orchestrator | 
skipping: [testbed-node-1] 2026-02-02 01:18:06.849277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.849322 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.849337 | orchestrator | 2026-02-02 01:18:06.849347 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-02 01:18:06.849357 | orchestrator | Monday 02 February 2026 01:11:54 +0000 (0:00:00.956) 0:03:44.135 ******* 2026-02-02 01:18:06.849368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.849380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.849404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.849416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.849435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.849446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.849470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.849481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.849492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.849511 | orchestrator | 2026-02-02 01:18:06.849521 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-02 01:18:06.849532 | orchestrator | Monday 02 February 2026 01:11:58 +0000 (0:00:03.888) 0:03:48.024 ******* 2026-02-02 01:18:06.849542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.849575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.849598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.849610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.849628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.849639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 01:18:06.849662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.849674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.849690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.849701 | orchestrator | 2026-02-02 01:18:06.849711 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-02 01:18:06.849721 | orchestrator | Monday 02 February 2026 01:12:06 +0000 (0:00:08.220) 0:03:56.244 ******* 2026-02-02 01:18:06.849732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.849775 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.849786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 01:18:06.849815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 01:18:06.849826 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.849846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.849858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.849875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 01:18:06.849885 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.849896 | orchestrator |
2026-02-02 01:18:06.849906 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-02-02 01:18:06.849916 | orchestrator | Monday 02 February 2026 01:12:07 +0000 (0:00:00.727) 0:03:56.972 *******
2026-02-02 01:18:06.849926 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.849936 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.849945 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.849955 | orchestrator |
2026-02-02 01:18:06.849965 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] *****************************
2026-02-02 01:18:06.849975 | orchestrator | Monday 02 February 2026 01:12:08 +0000 (0:00:00.712) 0:03:57.685 *******
2026-02-02 01:18:06.849985 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.849994 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.850004 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.850072 | orchestrator |
2026-02-02 01:18:06.850085 | orchestrator | TASK [nova : Copying over vendordata file for nova services] *******************
2026-02-02 01:18:06.850095 | orchestrator | Monday 02 February 2026 01:12:09 +0000 (0:00:01.019) 0:03:58.705 *******
2026-02-02 01:18:06.850105 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)
2026-02-02 01:18:06.850115 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-02 01:18:06.850125 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.850135 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)
2026-02-02 01:18:06.850144 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-02 01:18:06.850154 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.850164 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)
2026-02-02 01:18:06.850173 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-02 01:18:06.850183 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.850193 | orchestrator |
2026-02-02 01:18:06.850203 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************
2026-02-02 01:18:06.850213 | orchestrator | Monday 02 February 2026 01:12:10 +0000 (0:00:00.628) 0:03:59.333 *******
2026-02-02 01:18:06.850223 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'})
2026-02-02 01:18:06.850234 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'})
2026-02-02 01:18:06.850244 | orchestrator |
2026-02-02 01:18:06.850254 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] ***************
2026-02-02 01:18:06.850264 | orchestrator | Monday 02 February 2026 01:12:11 +0000 (0:00:01.402) 0:04:00.735 *******
2026-02-02 01:18:06.850280 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:18:06.850290 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:18:06.850300 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:18:06.850310 | orchestrator |
2026-02-02 01:18:06.850319 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] **********
2026-02-02 01:18:06.850329 | orchestrator | Monday 02 February 2026 01:12:14 +0000 (0:00:02.563) 0:04:03.299 *******
2026-02-02 01:18:06.850339 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:18:06.850349 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:18:06.850358 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:18:06.850368 | orchestrator |
2026-02-02 01:18:06.850378 | orchestrator | TASK [service-check-containers : nova | Check containers] **********************
2026-02-02 01:18:06.850393 | orchestrator | Monday 02 February 2026 01:12:16 +0000 (0:00:02.265) 0:04:05.565 *******
2026-02-02 01:18:06.850418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.850431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.850443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.850471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.850483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.850495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.850507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 01:18:06.850518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 01:18:06.850534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 01:18:06.850544 | orchestrator |
2026-02-02 01:18:06.850583 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] ***
2026-02-02 01:18:06.850604 | orchestrator | Monday 02 February 2026 01:12:19 +0000 (0:00:02.867) 0:04:08.432 *******
2026-02-02 01:18:06.850614 | orchestrator | changed: [testbed-node-0] => {
2026-02-02 01:18:06.850625 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:18:06.850635 | orchestrator | }
2026-02-02 01:18:06.850645 | orchestrator | changed: [testbed-node-1] => {
2026-02-02 01:18:06.850655 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:18:06.850665 | orchestrator | }
2026-02-02 01:18:06.850675 | orchestrator | changed: [testbed-node-2] => {
2026-02-02 01:18:06.850684 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 01:18:06.850694 | orchestrator | }
2026-02-02 01:18:06.850704 | orchestrator |
2026-02-02 01:18:06.850714 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-02 01:18:06.850723 | orchestrator | Monday 02 February 2026 01:12:19 +0000 (0:00:00.633) 0:04:09.066 *******
2026-02-02 01:18:06.850734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.850746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.850767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 01:18:06.850778 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.850798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.850810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.850822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 01:18:06.850832 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.850843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.850861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 01:18:06.850883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 01:18:06.850894 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.850904 | orchestrator |
2026-02-02 01:18:06.850914 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-02 01:18:06.850924 | orchestrator | Monday 02 February 2026 01:12:20 +0000 (0:00:00.976) 0:04:10.042 *******
2026-02-02 01:18:06.850934 | orchestrator |
2026-02-02 01:18:06.850944 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-02 01:18:06.850954 | orchestrator | Monday 02 February 2026 01:12:20 +0000 (0:00:00.136) 0:04:10.179 *******
2026-02-02 01:18:06.850964 | orchestrator |
2026-02-02 01:18:06.850974 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-02 01:18:06.850984 | orchestrator | Monday 02 February 2026 01:12:21 +0000 (0:00:00.133) 0:04:10.312 *******
2026-02-02 01:18:06.850993 | orchestrator |
2026-02-02 01:18:06.851003 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-02-02 01:18:06.851013 | orchestrator | Monday 02 February 2026 01:12:21 +0000 (0:00:00.338) 0:04:10.651 *******
2026-02-02 01:18:06.851023 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:18:06.851032 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:18:06.851042 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:18:06.851052 | orchestrator |
2026-02-02 01:18:06.851062 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-02-02 01:18:06.851072 | orchestrator | Monday 02 February 2026 01:12:36 +0000 (0:00:14.717) 0:04:25.368 *******
2026-02-02 01:18:06.851081 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:18:06.851091 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:18:06.851101 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:18:06.851121 | orchestrator |
2026-02-02 01:18:06.851131 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] ***********************
2026-02-02 01:18:06.851141 | orchestrator | Monday 02 February 2026 01:12:41 +0000 (0:00:05.664) 0:04:31.033 *******
2026-02-02 01:18:06.851151 | orchestrator | changed: [testbed-node-1]
2026-02-02 01:18:06.851161 | orchestrator | changed: [testbed-node-2]
2026-02-02 01:18:06.851170 | orchestrator | changed: [testbed-node-0]
2026-02-02 01:18:06.851180 | orchestrator |
2026-02-02 01:18:06.851190 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-02-02 01:18:06.851200 | orchestrator |
2026-02-02 01:18:06.851210 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-02 01:18:06.851219 | orchestrator | Monday 02 February 2026 01:12:46 +0000 (0:00:04.854) 0:04:35.887 *******
2026-02-02 01:18:06.851230 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:18:06.851240 | orchestrator |
2026-02-02 01:18:06.851249 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-02 01:18:06.851259 | orchestrator | Monday 02 February 2026 01:12:47 +0000 (0:00:01.287) 0:04:37.175 *******
2026-02-02 01:18:06.851269 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:18:06.851279 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:18:06.851288 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:18:06.851298 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.851308 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.851318 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.851327 | orchestrator |
2026-02-02 01:18:06.851337 | orchestrator | TASK [nova-cell : Get new Libvirt version] *************************************
2026-02-02 01:18:06.851347 | orchestrator | Monday 02 February 2026 01:12:48 +0000 (0:00:00.663) 0:04:37.838 *******
2026-02-02 01:18:06.851357 | orchestrator | changed: [testbed-node-3]
2026-02-02 01:18:06.851366 | orchestrator |
2026-02-02 01:18:06.851376 | orchestrator | TASK [nova-cell : Cache new Libvirt version] ***********************************
2026-02-02 01:18:06.851386 | orchestrator | Monday 02 February 2026 01:13:11 +0000 (0:00:23.324) 0:05:01.162 *******
2026-02-02 01:18:06.851396 | orchestrator | ok: [testbed-node-3]
2026-02-02 01:18:06.851406 | orchestrator |
2026-02-02 01:18:06.851416 | orchestrator | TASK [Get nova_libvirt image info] *********************************************
2026-02-02 01:18:06.851425 | orchestrator | Monday 02 February 2026 01:13:13 +0000 (0:00:01.492) 0:05:02.655 *******
2026-02-02 01:18:06.851435 | orchestrator | included: service-image-info for testbed-node-3
2026-02-02 01:18:06.851445 | orchestrator |
2026-02-02 01:18:06.851455 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] *****************
2026-02-02 01:18:06.851465 | orchestrator | Monday 02 February 2026 01:13:14 +0000 (0:00:00.847) 0:05:03.503 *******
2026-02-02 01:18:06.851475 | orchestrator | ok: [testbed-node-3]
2026-02-02 01:18:06.851484 | orchestrator |
2026-02-02 01:18:06.851494 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-02-02 01:18:06.851504 | orchestrator | Monday 02 February 2026 01:13:17 +0000 (0:00:03.494) 0:05:06.997 *******
2026-02-02 01:18:06.851514 | orchestrator | ok: [testbed-node-3]
2026-02-02 01:18:06.851524 | orchestrator |
2026-02-02 01:18:06.851533 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] ****************
2026-02-02 01:18:06.851543 | orchestrator | Monday 02 February 2026 01:13:19 +0000 (0:00:02.180) 0:05:09.177 *******
2026-02-02 01:18:06.851598 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:18:06.851609 | orchestrator |
2026-02-02 01:18:06.851619 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-02-02 01:18:06.851629 | orchestrator | Monday 02 February 2026 01:13:22 +0000 (0:00:02.406) 0:05:11.583 *******
2026-02-02 01:18:06.851639 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:18:06.851649 | orchestrator |
2026-02-02 01:18:06.851659 | orchestrator | TASK [nova-cell : Get container facts] *****************************************
2026-02-02 01:18:06.851680 | orchestrator | Monday 02 February 2026 01:13:24 +0000 (0:00:01.991) 0:05:13.575 *******
2026-02-02 01:18:06.851698 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 01:18:06.851708 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 01:18:06.851718 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 01:18:06.851728 | orchestrator |
2026-02-02 01:18:06.851738 | orchestrator | TASK [nova-cell : Get current Libvirt version] *********************************
2026-02-02 01:18:06.851748 | orchestrator | Monday 02 February 2026 01:13:34 +0000 (0:00:09.730) 0:05:23.306 *******
2026-02-02 01:18:06.851758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 01:18:06.851768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 01:18:06.851778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 01:18:06.851787 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:18:06.851797 | orchestrator |
2026-02-02 01:18:06.851807 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************
2026-02-02 01:18:06.851817 | orchestrator | Monday 02 February 2026 01:13:39 +0000 (0:00:05.493) 0:05:28.799 *******
2026-02-02 01:18:06.851827 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-3', 'ansible_loop_var': 'item'})
2026-02-02 01:18:06.851838 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-4', 'ansible_loop_var': 'item'})
2026-02-02 01:18:06.851849 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-5', 'ansible_loop_var': 'item'})
2026-02-02 01:18:06.851859 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:18:06.851869 | orchestrator |
2026-02-02 01:18:06.851879 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-02-02 01:18:06.851889 | orchestrator | Monday 02 February 2026 01:13:43 +0000 (0:00:03.613) 0:05:32.413 *******
2026-02-02 01:18:06.851899 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.851909 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.851919 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.851928 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 01:18:06.851938 | orchestrator |
2026-02-02 01:18:06.851948 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-02 01:18:06.851958 | orchestrator | Monday 02 February 2026 01:13:44 +0000 (0:00:01.035) 0:05:33.448 *******
2026-02-02 01:18:06.851968 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-02-02 01:18:06.851978 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-02-02 01:18:06.851988 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-02-02 01:18:06.851998 | orchestrator |
2026-02-02 01:18:06.852008 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-02 01:18:06.852018 | orchestrator | Monday 02 February 2026 01:13:44 +0000 (0:00:00.680) 0:05:34.129 *******
2026-02-02 01:18:06.852028 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-02-02 01:18:06.852037 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-02-02 01:18:06.852047 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-02-02 01:18:06.852057 | orchestrator |
2026-02-02 01:18:06.852067 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-02 01:18:06.852077 | orchestrator | Monday 02 February 2026 01:13:45 +0000 (0:00:01.131) 0:05:35.261 *******
2026-02-02 01:18:06.852096 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-02-02 01:18:06.852106 | orchestrator | skipping: [testbed-node-3]
2026-02-02 01:18:06.852116 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-02-02 01:18:06.852125 | orchestrator | skipping: [testbed-node-4]
2026-02-02 01:18:06.852134 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-02-02 01:18:06.852142 | orchestrator | skipping: [testbed-node-5]
2026-02-02 01:18:06.852150 | orchestrator |
2026-02-02 01:18:06.852158 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-02-02 01:18:06.852167 | orchestrator | Monday 02 February 2026 01:13:46 +0000 (0:00:00.806) 0:05:36.068 *******
2026-02-02 01:18:06.852175 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 01:18:06.852183 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 01:18:06.852191 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.852200 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 01:18:06.852208 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 01:18:06.852216 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 01:18:06.852224 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.852232 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 01:18:06.852249 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 01:18:06.852257 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 01:18:06.852265 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.852274 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 01:18:06.852282 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 01:18:06.852290 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 01:18:06.852298 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 01:18:06.852306 | orchestrator |
2026-02-02 01:18:06.852314 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-02-02 01:18:06.852322 | orchestrator | Monday 02 February 2026 01:13:48 +0000 (0:00:02.062) 0:05:38.131 *******
2026-02-02 01:18:06.852330 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.852338 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.852346 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.852354 | orchestrator | changed: [testbed-node-3]
2026-02-02 01:18:06.852362 | orchestrator | changed: [testbed-node-4]
2026-02-02 01:18:06.852371 | orchestrator | changed: [testbed-node-5]
2026-02-02 01:18:06.852379 | orchestrator |
2026-02-02 01:18:06.852387 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-02-02 01:18:06.852395 | orchestrator | Monday 02 February 2026 01:13:50 +0000 (0:00:01.263) 0:05:39.394 *******
2026-02-02 01:18:06.852403 | orchestrator | skipping: [testbed-node-0]
2026-02-02 01:18:06.852411 | orchestrator | skipping: [testbed-node-1]
2026-02-02 01:18:06.852419 | orchestrator | skipping: [testbed-node-2]
2026-02-02 01:18:06.852427 | orchestrator |
changed: [testbed-node-5] 2026-02-02 01:18:06.852435 | orchestrator | changed: [testbed-node-3] 2026-02-02 01:18:06.852443 | orchestrator | changed: [testbed-node-4] 2026-02-02 01:18:06.852451 | orchestrator | 2026-02-02 01:18:06.852459 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-02 01:18:06.852467 | orchestrator | Monday 02 February 2026 01:13:51 +0000 (0:00:01.792) 0:05:41.186 ******* 2026-02-02 01:18:06.852477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852500 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852576 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852640 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 01:18:06.852648 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 01:18:06.852657 | orchestrator |
2026-02-02 01:18:06.852665 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-02 01:18:06.852673 | orchestrator | Monday 02 February 2026 01:13:54 +0000 (0:00:02.563) 0:05:43.750 *******
2026-02-02 01:18:06.852682 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 01:18:06.852690 | orchestrator |
2026-02-02 01:18:06.852698 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-02-02 01:18:06.852707 |
orchestrator | Monday 02 February 2026 01:13:55 +0000 (0:00:01.310) 0:05:45.060 ******* 2026-02-02 01:18:06.852724 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852734 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852750 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 
01:18:06.852815 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852858 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.852868 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 01:18:06.852882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 01:18:06.852890 | orchestrator |
2026-02-02 01:18:06.852898 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-02-02 01:18:06.852907 | orchestrator | Monday 02 February 2026 01:13:59 +0000 (0:00:03.780) 0:05:48.840 *******
2026-02-02 01:18:06.852915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.852925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.852942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.852956 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.852965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.852974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.852982 | 
orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.852991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.853000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.853012 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.853026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-02 01:18:06.853039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.853048 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.853056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.853064 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.853073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-02 01:18:06.853082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.853090 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.853098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-02 01:18:06.853115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.853129 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.853137 | orchestrator | 2026-02-02 01:18:06.853145 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-02 01:18:06.853153 | orchestrator | Monday 02 February 2026 01:14:01 +0000 (0:00:02.283) 0:05:51.124 ******* 2026-02-02 01:18:06.853162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.853171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.853179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-02 01:18:06.853187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.853204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.853217 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.853226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.853234 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.853243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.853251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.853259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-02 01:18:06.853268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.853281 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.853388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.853399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.853408 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.853416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.853425 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.853433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-02 01:18:06.853442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.853450 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.853458 | orchestrator | 2026-02-02 01:18:06.853466 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-02 01:18:06.853475 | orchestrator | Monday 02 February 2026 01:14:04 +0000 (0:00:02.558) 0:05:53.683 ******* 2026-02-02 01:18:06.853483 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.853499 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.853507 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.853515 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 01:18:06.853523 | orchestrator | 2026-02-02 01:18:06.853531 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-02 01:18:06.853540 | orchestrator | Monday 02 February 2026 01:14:05 +0000 (0:00:00.975) 0:05:54.658 ******* 2026-02-02 01:18:06.853590 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-02 01:18:06.853600 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-02 01:18:06.853609 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-02 01:18:06.853617 | orchestrator | 2026-02-02 01:18:06.853625 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-02-02 01:18:06.853634 | orchestrator | Monday 02 February 2026 01:14:06 +0000 (0:00:01.276) 0:05:55.935 ******* 2026-02-02 01:18:06.853642 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-02 01:18:06.853650 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-02 01:18:06.853657 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-02 01:18:06.853665 | orchestrator | 2026-02-02 01:18:06.853678 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 
2026-02-02 01:18:06.853691 | orchestrator | Monday 02 February 2026 01:14:07 +0000 (0:00:01.003) 0:05:56.938 ******* 2026-02-02 01:18:06.853699 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:18:06.853708 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:18:06.853716 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:18:06.853724 | orchestrator | 2026-02-02 01:18:06.853732 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-02-02 01:18:06.853740 | orchestrator | Monday 02 February 2026 01:14:08 +0000 (0:00:00.514) 0:05:57.453 ******* 2026-02-02 01:18:06.853748 | orchestrator | ok: [testbed-node-3] 2026-02-02 01:18:06.853756 | orchestrator | ok: [testbed-node-4] 2026-02-02 01:18:06.853764 | orchestrator | ok: [testbed-node-5] 2026-02-02 01:18:06.853772 | orchestrator | 2026-02-02 01:18:06.853780 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-02-02 01:18:06.853788 | orchestrator | Monday 02 February 2026 01:14:08 +0000 (0:00:00.535) 0:05:57.988 ******* 2026-02-02 01:18:06.853796 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-02 01:18:06.853804 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-02 01:18:06.853812 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-02 01:18:06.853821 | orchestrator | 2026-02-02 01:18:06.853829 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-02-02 01:18:06.853837 | orchestrator | Monday 02 February 2026 01:14:10 +0000 (0:00:01.341) 0:05:59.330 ******* 2026-02-02 01:18:06.853845 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-02 01:18:06.853853 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-02 01:18:06.853861 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-02 01:18:06.853869 | orchestrator | 2026-02-02 01:18:06.853877 | 
orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-02-02 01:18:06.853885 | orchestrator | Monday 02 February 2026 01:14:11 +0000 (0:00:01.088) 0:06:00.419 ******* 2026-02-02 01:18:06.853893 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-02 01:18:06.853901 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-02 01:18:06.853909 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-02 01:18:06.853917 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-02-02 01:18:06.853925 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-02-02 01:18:06.853933 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-02-02 01:18:06.853941 | orchestrator | 2026-02-02 01:18:06.853948 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-02-02 01:18:06.853960 | orchestrator | Monday 02 February 2026 01:14:14 +0000 (0:00:03.612) 0:06:04.031 ******* 2026-02-02 01:18:06.853967 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.853974 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.853981 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.853987 | orchestrator | 2026-02-02 01:18:06.853994 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-02-02 01:18:06.854001 | orchestrator | Monday 02 February 2026 01:14:15 +0000 (0:00:00.328) 0:06:04.360 ******* 2026-02-02 01:18:06.854008 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.854038 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.854047 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.854054 | orchestrator | 2026-02-02 01:18:06.854061 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-02-02 01:18:06.854068 | orchestrator | Monday 02 
February 2026 01:14:15 +0000 (0:00:00.565) 0:06:04.926 ******* 2026-02-02 01:18:06.854075 | orchestrator | changed: [testbed-node-3] 2026-02-02 01:18:06.854081 | orchestrator | changed: [testbed-node-4] 2026-02-02 01:18:06.854088 | orchestrator | changed: [testbed-node-5] 2026-02-02 01:18:06.854095 | orchestrator | 2026-02-02 01:18:06.854102 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-02-02 01:18:06.854109 | orchestrator | Monday 02 February 2026 01:14:16 +0000 (0:00:01.325) 0:06:06.251 ******* 2026-02-02 01:18:06.854116 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-02-02 01:18:06.854124 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-02-02 01:18:06.854131 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-02-02 01:18:06.854139 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-02-02 01:18:06.854147 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-02-02 01:18:06.854154 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 
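The "Pushing nova secret xml for libvirt" task above registers Ceph client secrets (one per item UUID) with libvirt on each compute node. As a rough sketch of the kind of `<secret>` definition involved — not kolla-ansible's actual template; the helper name and the Ceph usage name are illustrative assumptions:

```python
# Illustrative sketch: render a libvirt <secret> definition of the kind the
# task above distributes. Helper name and usage_name are assumptions for
# illustration, not kolla-ansible's actual template or naming.
from xml.etree import ElementTree as ET

def render_libvirt_ceph_secret(uuid: str, usage_name: str, description: str) -> str:
    """Build a libvirt secret definition for a Ceph client key."""
    secret = ET.Element("secret", ephemeral="no", private="no")
    ET.SubElement(secret, "uuid").text = uuid
    ET.SubElement(secret, "description").text = description
    usage = ET.SubElement(secret, "usage", type="ceph")
    ET.SubElement(usage, "name").text = usage_name
    return ET.tostring(secret, encoding="unicode")

# UUID and description match the ceph-ephemeral-nova item shown in the log.
xml = render_libvirt_ceph_secret(
    uuid="5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd",
    usage_name="client.nova secret",  # assumed usage name
    description="Ceph Client Secret for Ephemeral Storage (Nova)",
)
print(xml)
```

On the host, a definition like this would be loaded with `virsh secret-define` and the actual key attached with `virsh secret-set-value`, which corresponds to the subsequent "Pushing secrets key for libvirt" task in the log.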
2026-02-02 01:18:06.854161 | orchestrator | 2026-02-02 01:18:06.854168 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-02-02 01:18:06.854175 | orchestrator | Monday 02 February 2026 01:14:20 +0000 (0:00:03.682) 0:06:09.933 ******* 2026-02-02 01:18:06.854182 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-02 01:18:06.854198 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-02 01:18:06.854205 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-02 01:18:06.854212 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-02 01:18:06.854219 | orchestrator | changed: [testbed-node-3] 2026-02-02 01:18:06.854225 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-02 01:18:06.854232 | orchestrator | changed: [testbed-node-4] 2026-02-02 01:18:06.854239 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-02 01:18:06.854245 | orchestrator | changed: [testbed-node-5] 2026-02-02 01:18:06.854252 | orchestrator | 2026-02-02 01:18:06.854259 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-02-02 01:18:06.854266 | orchestrator | Monday 02 February 2026 01:14:23 +0000 (0:00:03.281) 0:06:13.215 ******* 2026-02-02 01:18:06.854273 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.854284 | orchestrator | 2026-02-02 01:18:06.854291 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-02-02 01:18:06.854298 | orchestrator | Monday 02 February 2026 01:14:24 +0000 (0:00:00.167) 0:06:13.382 ******* 2026-02-02 01:18:06.854305 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.854312 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.854318 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.854326 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.854332 | orchestrator | skipping: 
[testbed-node-1] 2026-02-02 01:18:06.854339 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.854346 | orchestrator | 2026-02-02 01:18:06.854353 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-02-02 01:18:06.854359 | orchestrator | Monday 02 February 2026 01:14:24 +0000 (0:00:00.842) 0:06:14.224 ******* 2026-02-02 01:18:06.854366 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-02 01:18:06.854373 | orchestrator | 2026-02-02 01:18:06.854380 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-02-02 01:18:06.854386 | orchestrator | Monday 02 February 2026 01:14:25 +0000 (0:00:00.799) 0:06:15.024 ******* 2026-02-02 01:18:06.854393 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.854400 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.854407 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.854413 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.854420 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.854427 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.854433 | orchestrator | 2026-02-02 01:18:06.854440 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-02-02 01:18:06.854447 | orchestrator | Monday 02 February 2026 01:14:26 +0000 (0:00:00.661) 0:06:15.686 ******* 2026-02-02 01:18:06.854454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854489 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854497 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854519 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-02 01:18:06.854545 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854589 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854608 | orchestrator | 2026-02-02 01:18:06.854614 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-02 01:18:06.854621 | orchestrator | Monday 02 February 2026 01:14:30 +0000 (0:00:04.130) 0:06:19.816 ******* 2026-02-02 01:18:06.854636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.854644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.854651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.854658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.854666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.854722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.854731 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2026-02-02 01:18:06.854738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854759 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.854809 | orchestrator | 2026-02-02 01:18:06.854816 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-02 01:18:06.854823 | orchestrator | Monday 02 February 2026 01:14:37 +0000 (0:00:06.868) 0:06:26.684 ******* 2026-02-02 01:18:06.854830 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.854837 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.854844 | orchestrator | skipping: [testbed-node-5] 2026-02-02 
01:18:06.854850 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.854857 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.854864 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.854871 | orchestrator | 2026-02-02 01:18:06.854878 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-02 01:18:06.854884 | orchestrator | Monday 02 February 2026 01:14:39 +0000 (0:00:01.688) 0:06:28.373 ******* 2026-02-02 01:18:06.854891 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-02 01:18:06.854906 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-02 01:18:06.854912 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-02 01:18:06.854919 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-02 01:18:06.854926 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-02 01:18:06.854933 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-02 01:18:06.854940 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-02 01:18:06.854947 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.854954 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-02 01:18:06.854961 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.854968 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-02 01:18:06.854974 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.854981 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 
2026-02-02 01:18:06.854988 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-02 01:18:06.854995 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-02 01:18:06.855002 | orchestrator | 2026-02-02 01:18:06.855009 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-02 01:18:06.855016 | orchestrator | Monday 02 February 2026 01:14:43 +0000 (0:00:04.193) 0:06:32.566 ******* 2026-02-02 01:18:06.855023 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.855029 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.855044 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.855051 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.855058 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.855065 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.855072 | orchestrator | 2026-02-02 01:18:06.855079 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-02 01:18:06.855086 | orchestrator | Monday 02 February 2026 01:14:44 +0000 (0:00:00.864) 0:06:33.430 ******* 2026-02-02 01:18:06.855093 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-02 01:18:06.855100 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-02 01:18:06.855107 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-02 01:18:06.855114 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-02 01:18:06.855121 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 
'auth.conf', 'service': 'nova-compute'}) 2026-02-02 01:18:06.855128 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-02 01:18:06.855135 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-02 01:18:06.855142 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-02 01:18:06.855148 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-02 01:18:06.855155 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-02 01:18:06.855162 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.855173 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-02 01:18:06.855180 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.855187 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-02 01:18:06.855194 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.855201 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-02 01:18:06.855208 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-02 01:18:06.855215 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-02 01:18:06.855222 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-02 01:18:06.855228 | orchestrator | changed: 
[testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-02 01:18:06.855235 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-02 01:18:06.855242 | orchestrator | 2026-02-02 01:18:06.855249 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-02 01:18:06.855256 | orchestrator | Monday 02 February 2026 01:14:49 +0000 (0:00:05.149) 0:06:38.580 ******* 2026-02-02 01:18:06.855263 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 01:18:06.855270 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 01:18:06.855277 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 01:18:06.855284 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-02 01:18:06.855291 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-02 01:18:06.855298 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-02 01:18:06.855305 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-02 01:18:06.855312 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-02 01:18:06.855319 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-02 01:18:06.855325 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 01:18:06.855332 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 01:18:06.855339 | orchestrator | skipping: [testbed-node-0] => 
(item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-02 01:18:06.855346 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.855353 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 01:18:06.855360 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-02 01:18:06.855374 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.855381 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-02 01:18:06.855388 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.855395 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-02 01:18:06.855402 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-02 01:18:06.855409 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-02 01:18:06.855415 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-02 01:18:06.855427 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-02 01:18:06.855434 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-02 01:18:06.855441 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-02 01:18:06.855448 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-02 01:18:06.855455 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-02 01:18:06.855461 | orchestrator | 2026-02-02 01:18:06.855468 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-02 01:18:06.855475 | orchestrator | Monday 02 February 2026 01:14:57 
+0000 (0:00:07.875) 0:06:46.455 ******* 2026-02-02 01:18:06.855482 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.855489 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.855496 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.855503 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.855510 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.855517 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.855523 | orchestrator | 2026-02-02 01:18:06.855530 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-02 01:18:06.855537 | orchestrator | Monday 02 February 2026 01:14:57 +0000 (0:00:00.615) 0:06:47.071 ******* 2026-02-02 01:18:06.855544 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.855562 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.855569 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.855576 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.855583 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.855590 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.855597 | orchestrator | 2026-02-02 01:18:06.855604 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-02 01:18:06.855610 | orchestrator | Monday 02 February 2026 01:14:58 +0000 (0:00:00.858) 0:06:47.930 ******* 2026-02-02 01:18:06.855617 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.855624 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.855631 | orchestrator | changed: [testbed-node-3] 2026-02-02 01:18:06.855638 | orchestrator | changed: [testbed-node-4] 2026-02-02 01:18:06.855645 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.855651 | orchestrator | changed: [testbed-node-5] 2026-02-02 01:18:06.855658 | orchestrator | 2026-02-02 01:18:06.855665 | orchestrator | TASK [nova-cell : 
Copying over existing policy file] *************************** 2026-02-02 01:18:06.855672 | orchestrator | Monday 02 February 2026 01:15:00 +0000 (0:00:02.110) 0:06:50.040 ******* 2026-02-02 01:18:06.855679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.855687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.855708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.855716 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.855723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.855731 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.855738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.855745 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.855752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.855770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.855777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.855784 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.855792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-02 01:18:06.855799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.855806 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.855813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-02 01:18:06.855820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.855831 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.855838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-02 01:18:06.855852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.855859 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.855866 | orchestrator | 2026-02-02 01:18:06.855873 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-02 01:18:06.855880 | orchestrator | Monday 02 February 2026 01:15:02 +0000 (0:00:01.842) 0:06:51.883 ******* 2026-02-02 01:18:06.855887 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-02 01:18:06.855894 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-02 01:18:06.855901 | 
orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.855907 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-02 01:18:06.855914 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-02 01:18:06.855921 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.855928 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-02 01:18:06.855935 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-02 01:18:06.855941 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.855948 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-02 01:18:06.855955 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-02 01:18:06.855962 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.855968 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-02-02 01:18:06.855975 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-02 01:18:06.855982 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.855989 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-02 01:18:06.855995 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-02 01:18:06.856002 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.856009 | orchestrator | 2026-02-02 01:18:06.856016 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-02-02 01:18:06.856023 | orchestrator | Monday 02 February 2026 01:15:03 +0000 (0:00:00.700) 0:06:52.584 ******* 2026-02-02 01:18:06.856030 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856042 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856057 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 01:18:06.856169 | orchestrator | 2026-02-02 01:18:06.856176 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-02-02 01:18:06.856189 | orchestrator | Monday 02 February 2026 01:15:06 +0000 (0:00:03.419) 0:06:56.003 ******* 2026-02-02 01:18:06.856197 | orchestrator | changed: [testbed-node-3] => { 2026-02-02 01:18:06.856204 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:18:06.856211 | orchestrator | } 2026-02-02 01:18:06.856217 | orchestrator | changed: [testbed-node-4] => { 2026-02-02 01:18:06.856224 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:18:06.856231 | orchestrator | } 2026-02-02 01:18:06.856238 | orchestrator | changed: [testbed-node-5] => { 2026-02-02 01:18:06.856245 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:18:06.856252 | orchestrator | } 2026-02-02 01:18:06.856258 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 01:18:06.856265 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:18:06.856272 | orchestrator | } 2026-02-02 01:18:06.856279 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 01:18:06.856286 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:18:06.856293 | orchestrator | } 2026-02-02 01:18:06.856299 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 01:18:06.856306 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 01:18:06.856313 | 
orchestrator | } 2026-02-02 01:18:06.856320 | orchestrator | 2026-02-02 01:18:06.856327 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 01:18:06.856334 | orchestrator | Monday 02 February 2026 01:15:07 +0000 (0:00:00.807) 0:06:56.811 ******* 2026-02-02 01:18:06.856341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.856355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.856363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.856370 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.856377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.856431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.856440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.856453 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.856460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2026-02-02 01:18:06.856467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 01:18:06.856474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.856481 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.856495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-02 01:18:06.856503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.856510 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.856517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-02 01:18:06.856528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.856536 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.856543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-02 01:18:06.856590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 01:18:06.856598 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.856605 | orchestrator | 2026-02-02 01:18:06.856612 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-02 01:18:06.856619 | orchestrator | Monday 02 February 2026 01:15:09 +0000 (0:00:02.345) 0:06:59.156 ******* 2026-02-02 01:18:06.856626 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.856632 | orchestrator | skipping: 
[testbed-node-4] 2026-02-02 01:18:06.856639 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.856646 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.856652 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.856659 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.856666 | orchestrator | 2026-02-02 01:18:06.856673 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-02 01:18:06.856679 | orchestrator | Monday 02 February 2026 01:15:10 +0000 (0:00:00.903) 0:07:00.060 ******* 2026-02-02 01:18:06.856686 | orchestrator | 2026-02-02 01:18:06.856693 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-02 01:18:06.856700 | orchestrator | Monday 02 February 2026 01:15:10 +0000 (0:00:00.145) 0:07:00.205 ******* 2026-02-02 01:18:06.856706 | orchestrator | 2026-02-02 01:18:06.856713 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-02 01:18:06.856720 | orchestrator | Monday 02 February 2026 01:15:11 +0000 (0:00:00.137) 0:07:00.343 ******* 2026-02-02 01:18:06.856727 | orchestrator | 2026-02-02 01:18:06.856746 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-02 01:18:06.856754 | orchestrator | Monday 02 February 2026 01:15:11 +0000 (0:00:00.178) 0:07:00.521 ******* 2026-02-02 01:18:06.856760 | orchestrator | 2026-02-02 01:18:06.856767 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-02 01:18:06.856774 | orchestrator | Monday 02 February 2026 01:15:11 +0000 (0:00:00.135) 0:07:00.657 ******* 2026-02-02 01:18:06.856781 | orchestrator | 2026-02-02 01:18:06.856788 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-02 01:18:06.856795 | orchestrator | Monday 02 February 2026 01:15:11 +0000 (0:00:00.321) 
0:07:00.978 ******* 2026-02-02 01:18:06.856801 | orchestrator | 2026-02-02 01:18:06.856808 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-02 01:18:06.856815 | orchestrator | Monday 02 February 2026 01:15:11 +0000 (0:00:00.140) 0:07:01.119 ******* 2026-02-02 01:18:06.856822 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:18:06.856829 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:18:06.856836 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:18:06.856843 | orchestrator | 2026-02-02 01:18:06.856849 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-02 01:18:06.856856 | orchestrator | Monday 02 February 2026 01:15:22 +0000 (0:00:10.210) 0:07:11.330 ******* 2026-02-02 01:18:06.856863 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:18:06.856870 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:18:06.856877 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:18:06.856883 | orchestrator | 2026-02-02 01:18:06.856890 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-02 01:18:06.856897 | orchestrator | Monday 02 February 2026 01:15:38 +0000 (0:00:16.911) 0:07:28.242 ******* 2026-02-02 01:18:06.856904 | orchestrator | changed: [testbed-node-5] 2026-02-02 01:18:06.856911 | orchestrator | changed: [testbed-node-4] 2026-02-02 01:18:06.856917 | orchestrator | changed: [testbed-node-3] 2026-02-02 01:18:06.856924 | orchestrator | 2026-02-02 01:18:06.856931 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-02 01:18:06.856938 | orchestrator | Monday 02 February 2026 01:15:56 +0000 (0:00:17.160) 0:07:45.403 ******* 2026-02-02 01:18:06.856945 | orchestrator | changed: [testbed-node-3] 2026-02-02 01:18:06.856952 | orchestrator | changed: [testbed-node-5] 2026-02-02 01:18:06.856958 | orchestrator | changed: 
[testbed-node-4] 2026-02-02 01:18:06.856965 | orchestrator | 2026-02-02 01:18:06.856971 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-02 01:18:06.856978 | orchestrator | Monday 02 February 2026 01:16:26 +0000 (0:00:30.629) 0:08:16.032 ******* 2026-02-02 01:18:06.856985 | orchestrator | changed: [testbed-node-4] 2026-02-02 01:18:06.856992 | orchestrator | changed: [testbed-node-5] 2026-02-02 01:18:06.856998 | orchestrator | changed: [testbed-node-3] 2026-02-02 01:18:06.857005 | orchestrator | 2026-02-02 01:18:06.857011 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-02 01:18:06.857017 | orchestrator | Monday 02 February 2026 01:16:27 +0000 (0:00:00.809) 0:08:16.841 ******* 2026-02-02 01:18:06.857023 | orchestrator | changed: [testbed-node-3] 2026-02-02 01:18:06.857029 | orchestrator | changed: [testbed-node-4] 2026-02-02 01:18:06.857036 | orchestrator | changed: [testbed-node-5] 2026-02-02 01:18:06.857042 | orchestrator | 2026-02-02 01:18:06.857048 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-02 01:18:06.857054 | orchestrator | Monday 02 February 2026 01:16:28 +0000 (0:00:00.727) 0:08:17.569 ******* 2026-02-02 01:18:06.857061 | orchestrator | changed: [testbed-node-5] 2026-02-02 01:18:06.857067 | orchestrator | changed: [testbed-node-4] 2026-02-02 01:18:06.857073 | orchestrator | changed: [testbed-node-3] 2026-02-02 01:18:06.857079 | orchestrator | 2026-02-02 01:18:06.857086 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-02 01:18:06.857092 | orchestrator | Monday 02 February 2026 01:16:51 +0000 (0:00:23.623) 0:08:41.193 ******* 2026-02-02 01:18:06.857102 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.857109 | orchestrator | 2026-02-02 01:18:06.857115 | orchestrator | TASK [nova-cell : Waiting for 
nova-compute services to register themselves] **** 2026-02-02 01:18:06.857121 | orchestrator | Monday 02 February 2026 01:16:52 +0000 (0:00:00.358) 0:08:41.552 ******* 2026-02-02 01:18:06.857128 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.857134 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.857140 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.857146 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.857153 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.857159 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-02-02 01:18:06.857166 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-02 01:18:06.857172 | orchestrator | 2026-02-02 01:18:06.857178 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-02 01:18:06.857184 | orchestrator | Monday 02 February 2026 01:17:13 +0000 (0:00:21.512) 0:09:03.065 ******* 2026-02-02 01:18:06.857191 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.857197 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.857203 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.857210 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.857216 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.857222 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.857228 | orchestrator | 2026-02-02 01:18:06.857234 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-02 01:18:06.857241 | orchestrator | Monday 02 February 2026 01:17:24 +0000 (0:00:11.149) 0:09:14.214 ******* 2026-02-02 01:18:06.857247 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.857253 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.857259 | orchestrator | skipping: 
[testbed-node-2] 2026-02-02 01:18:06.857266 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.857272 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.857278 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-02-02 01:18:06.857284 | orchestrator | 2026-02-02 01:18:06.857291 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-02 01:18:06.857303 | orchestrator | Monday 02 February 2026 01:17:28 +0000 (0:00:03.686) 0:09:17.901 ******* 2026-02-02 01:18:06.857310 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-02 01:18:06.857316 | orchestrator | 2026-02-02 01:18:06.857322 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-02 01:18:06.857329 | orchestrator | Monday 02 February 2026 01:17:42 +0000 (0:00:13.703) 0:09:31.604 ******* 2026-02-02 01:18:06.857335 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-02 01:18:06.857341 | orchestrator | 2026-02-02 01:18:06.857348 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-02 01:18:06.857354 | orchestrator | Monday 02 February 2026 01:17:43 +0000 (0:00:01.594) 0:09:33.198 ******* 2026-02-02 01:18:06.857360 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.857367 | orchestrator | 2026-02-02 01:18:06.857373 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-02 01:18:06.857379 | orchestrator | Monday 02 February 2026 01:17:45 +0000 (0:00:01.732) 0:09:34.931 ******* 2026-02-02 01:18:06.857386 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-02 01:18:06.857392 | orchestrator | 2026-02-02 01:18:06.857398 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-02-02 01:18:06.857405 | 
orchestrator | 2026-02-02 01:18:06.857411 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-02-02 01:18:06.857417 | orchestrator | Monday 02 February 2026 01:17:58 +0000 (0:00:12.759) 0:09:47.690 ******* 2026-02-02 01:18:06.857423 | orchestrator | changed: [testbed-node-0] 2026-02-02 01:18:06.857434 | orchestrator | changed: [testbed-node-2] 2026-02-02 01:18:06.857440 | orchestrator | changed: [testbed-node-1] 2026-02-02 01:18:06.857447 | orchestrator | 2026-02-02 01:18:06.857453 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-02-02 01:18:06.857459 | orchestrator | 2026-02-02 01:18:06.857466 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-02-02 01:18:06.857472 | orchestrator | Monday 02 February 2026 01:17:59 +0000 (0:00:01.145) 0:09:48.836 ******* 2026-02-02 01:18:06.857478 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.857485 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.857491 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.857497 | orchestrator | 2026-02-02 01:18:06.857503 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-02-02 01:18:06.857510 | orchestrator | 2026-02-02 01:18:06.857516 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-02-02 01:18:06.857522 | orchestrator | Monday 02 February 2026 01:18:00 +0000 (0:00:00.825) 0:09:49.662 ******* 2026-02-02 01:18:06.857529 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-02-02 01:18:06.857535 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-02 01:18:06.857541 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-02 01:18:06.857561 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-02-02 
01:18:06.857568 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-02-02 01:18:06.857574 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-02-02 01:18:06.857580 | orchestrator | skipping: [testbed-node-3] 2026-02-02 01:18:06.857587 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-02-02 01:18:06.857593 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-02 01:18:06.857599 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-02 01:18:06.857605 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-02-02 01:18:06.857612 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-02-02 01:18:06.857618 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-02-02 01:18:06.857624 | orchestrator | skipping: [testbed-node-4] 2026-02-02 01:18:06.857631 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-02-02 01:18:06.857637 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-02 01:18:06.857643 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-02 01:18:06.857650 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-02-02 01:18:06.857656 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-02-02 01:18:06.857662 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-02-02 01:18:06.857669 | orchestrator | skipping: [testbed-node-5] 2026-02-02 01:18:06.857675 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-02-02 01:18:06.857681 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-02 01:18:06.857687 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-02 01:18:06.857694 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-02-02 
01:18:06.857700 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-02-02 01:18:06.857706 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-02-02 01:18:06.857713 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.857719 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-02-02 01:18:06.857725 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-02-02 01:18:06.857732 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-02 01:18:06.857738 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-02-02 01:18:06.857744 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-02-02 01:18:06.857756 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-02-02 01:18:06.857762 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.857769 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-02-02 01:18:06.857775 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-02 01:18:06.857784 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-02 01:18:06.857794 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-02-02 01:18:06.857801 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-02-02 01:18:06.857807 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-02-02 01:18:06.857814 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.857820 | orchestrator | 2026-02-02 01:18:06.857826 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-02-02 01:18:06.857832 | orchestrator | 2026-02-02 01:18:06.857839 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-02-02 01:18:06.857845 | orchestrator | Monday 02 February 2026 
01:18:01 +0000 (0:00:01.432) 0:09:51.095 ******* 2026-02-02 01:18:06.857851 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-02-02 01:18:06.857857 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-02-02 01:18:06.857864 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.857870 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-02-02 01:18:06.857876 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-02-02 01:18:06.857882 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.857889 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-02-02 01:18:06.857895 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-02-02 01:18:06.857901 | orchestrator | skipping: [testbed-node-2] 2026-02-02 01:18:06.857907 | orchestrator | 2026-02-02 01:18:06.857913 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-02-02 01:18:06.857920 | orchestrator | 2026-02-02 01:18:06.857926 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-02-02 01:18:06.857932 | orchestrator | Monday 02 February 2026 01:18:02 +0000 (0:00:00.607) 0:09:51.703 ******* 2026-02-02 01:18:06.857939 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.857945 | orchestrator | 2026-02-02 01:18:06.857951 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-02-02 01:18:06.857957 | orchestrator | 2026-02-02 01:18:06.857964 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-02-02 01:18:06.857970 | orchestrator | Monday 02 February 2026 01:18:03 +0000 (0:00:01.284) 0:09:52.987 ******* 2026-02-02 01:18:06.857976 | orchestrator | skipping: [testbed-node-0] 2026-02-02 01:18:06.857982 | orchestrator | skipping: [testbed-node-1] 2026-02-02 01:18:06.857988 | orchestrator | 
skipping: [testbed-node-2] 2026-02-02 01:18:06.857995 | orchestrator | 2026-02-02 01:18:06.858001 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:18:06.858007 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:18:06.858034 | orchestrator | testbed-node-0 : ok=59  changed=39  unreachable=0 failed=0 skipped=48  rescued=0 ignored=0 2026-02-02 01:18:06.858042 | orchestrator | testbed-node-1 : ok=32  changed=23  unreachable=0 failed=0 skipped=55  rescued=0 ignored=0 2026-02-02 01:18:06.858048 | orchestrator | testbed-node-2 : ok=32  changed=23  unreachable=0 failed=0 skipped=55  rescued=0 ignored=0 2026-02-02 01:18:06.858055 | orchestrator | testbed-node-3 : ok=44  changed=29  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-02-02 01:18:06.858067 | orchestrator | testbed-node-4 : ok=42  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-02 01:18:06.858073 | orchestrator | testbed-node-5 : ok=37  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-02 01:18:06.858079 | orchestrator | 2026-02-02 01:18:06.858086 | orchestrator | 2026-02-02 01:18:06.858092 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:18:06.858098 | orchestrator | Monday 02 February 2026 01:18:04 +0000 (0:00:00.481) 0:09:53.469 ******* 2026-02-02 01:18:06.858104 | orchestrator | =============================================================================== 2026-02-02 01:18:06.858110 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 35.51s 2026-02-02 01:18:06.858117 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.63s 2026-02-02 01:18:06.858123 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.62s 2026-02-02 01:18:06.858129 | 
orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 23.32s 2026-02-02 01:18:06.858135 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.83s 2026-02-02 01:18:06.858141 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.51s 2026-02-02 01:18:06.858148 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.70s 2026-02-02 01:18:06.858154 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 17.16s 2026-02-02 01:18:06.858160 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.91s 2026-02-02 01:18:06.858166 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 14.72s 2026-02-02 01:18:06.858173 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.22s 2026-02-02 01:18:06.858179 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.70s 2026-02-02 01:18:06.858190 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.39s 2026-02-02 01:18:06.858200 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.00s 2026-02-02 01:18:06.858207 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.87s 2026-02-02 01:18:06.858213 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 12.78s 2026-02-02 01:18:06.858219 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.76s 2026-02-02 01:18:06.858225 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.15s 2026-02-02 01:18:06.858232 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 10.21s 2026-02-02 01:18:06.858238 | orchestrator | 
nova-cell : Get container facts ----------------------------------------- 9.73s 2026-02-02 01:18:06.858244 | orchestrator | 2026-02-02 01:18:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:09.881898 | orchestrator | 2026-02-02 01:18:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:12.923259 | orchestrator | 2026-02-02 01:18:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:15.966569 | orchestrator | 2026-02-02 01:18:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:19.021445 | orchestrator | 2026-02-02 01:18:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:22.066428 | orchestrator | 2026-02-02 01:18:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:25.114547 | orchestrator | 2026-02-02 01:18:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:28.153419 | orchestrator | 2026-02-02 01:18:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:31.199507 | orchestrator | 2026-02-02 01:18:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:34.236124 | orchestrator | 2026-02-02 01:18:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:37.277034 | orchestrator | 2026-02-02 01:18:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:40.311219 | orchestrator | 2026-02-02 01:18:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:43.352706 | orchestrator | 2026-02-02 01:18:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:46.390933 | orchestrator | 2026-02-02 01:18:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:49.433082 | orchestrator | 2026-02-02 01:18:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:52.483735 | orchestrator | 2026-02-02 01:18:52 | INFO  | Wait 1 
second(s) until refresh of running tasks 2026-02-02 01:18:55.518243 | orchestrator | 2026-02-02 01:18:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:18:58.567102 | orchestrator | 2026-02-02 01:18:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:19:01.610410 | orchestrator | 2026-02-02 01:19:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-02 01:19:04.653262 | orchestrator | 2026-02-02 01:19:05.071289 | orchestrator | 2026-02-02 01:19:05.078317 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Feb 2 01:19:05 UTC 2026 2026-02-02 01:19:05.078443 | orchestrator | 2026-02-02 01:19:05.479370 | orchestrator | ok: Runtime: 0:37:04.715157 2026-02-02 01:19:05.743882 | 2026-02-02 01:19:05.744027 | TASK [Bootstrap services] 2026-02-02 01:19:06.507014 | orchestrator | 2026-02-02 01:19:06.507213 | orchestrator | # BOOTSTRAP 2026-02-02 01:19:06.507238 | orchestrator | 2026-02-02 01:19:06.507253 | orchestrator | + set -e 2026-02-02 01:19:06.507266 | orchestrator | + echo 2026-02-02 01:19:06.507280 | orchestrator | + echo '# BOOTSTRAP' 2026-02-02 01:19:06.507298 | orchestrator | + echo 2026-02-02 01:19:06.507342 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-02 01:19:06.515418 | orchestrator | + set -e 2026-02-02 01:19:06.515512 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-02 01:19:11.937054 | orchestrator | 2026-02-02 01:19:11 | INFO  | It takes a moment until task 41bf6b69-c5ec-4af6-a916-3b5f17e4ead2 (flavor-manager) has been started and output is visible here. 
2026-02-02 01:19:21.339998 | orchestrator | 2026-02-02 01:19:16 | INFO  | Flavor SCS-1L-1 created 2026-02-02 01:19:21.340175 | orchestrator | 2026-02-02 01:19:16 | INFO  | Flavor SCS-1L-1-5 created 2026-02-02 01:19:21.340195 | orchestrator | 2026-02-02 01:19:16 | INFO  | Flavor SCS-1V-2 created 2026-02-02 01:19:21.340207 | orchestrator | 2026-02-02 01:19:16 | INFO  | Flavor SCS-1V-2-5 created 2026-02-02 01:19:21.340219 | orchestrator | 2026-02-02 01:19:16 | INFO  | Flavor SCS-1V-4 created 2026-02-02 01:19:21.340230 | orchestrator | 2026-02-02 01:19:17 | INFO  | Flavor SCS-1V-4-10 created 2026-02-02 01:19:21.340242 | orchestrator | 2026-02-02 01:19:17 | INFO  | Flavor SCS-1V-8 created 2026-02-02 01:19:21.340254 | orchestrator | 2026-02-02 01:19:17 | INFO  | Flavor SCS-1V-8-20 created 2026-02-02 01:19:21.340276 | orchestrator | 2026-02-02 01:19:17 | INFO  | Flavor SCS-2V-4 created 2026-02-02 01:19:21.340287 | orchestrator | 2026-02-02 01:19:17 | INFO  | Flavor SCS-2V-4-10 created 2026-02-02 01:19:21.340299 | orchestrator | 2026-02-02 01:19:17 | INFO  | Flavor SCS-2V-8 created 2026-02-02 01:19:21.340310 | orchestrator | 2026-02-02 01:19:18 | INFO  | Flavor SCS-2V-8-20 created 2026-02-02 01:19:21.340321 | orchestrator | 2026-02-02 01:19:18 | INFO  | Flavor SCS-2V-16 created 2026-02-02 01:19:21.340332 | orchestrator | 2026-02-02 01:19:18 | INFO  | Flavor SCS-2V-16-50 created 2026-02-02 01:19:21.340342 | orchestrator | 2026-02-02 01:19:18 | INFO  | Flavor SCS-4V-8 created 2026-02-02 01:19:21.340354 | orchestrator | 2026-02-02 01:19:18 | INFO  | Flavor SCS-4V-8-20 created 2026-02-02 01:19:21.340364 | orchestrator | 2026-02-02 01:19:18 | INFO  | Flavor SCS-4V-16 created 2026-02-02 01:19:21.340375 | orchestrator | 2026-02-02 01:19:19 | INFO  | Flavor SCS-4V-16-50 created 2026-02-02 01:19:21.340386 | orchestrator | 2026-02-02 01:19:19 | INFO  | Flavor SCS-4V-32 created 2026-02-02 01:19:21.340397 | orchestrator | 2026-02-02 01:19:19 | INFO  | Flavor SCS-4V-32-100 created 
2026-02-02 01:19:21.340408 | orchestrator | 2026-02-02 01:19:19 | INFO  | Flavor SCS-8V-16 created 2026-02-02 01:19:21.340419 | orchestrator | 2026-02-02 01:19:19 | INFO  | Flavor SCS-8V-16-50 created 2026-02-02 01:19:21.340431 | orchestrator | 2026-02-02 01:19:20 | INFO  | Flavor SCS-8V-32 created 2026-02-02 01:19:21.340442 | orchestrator | 2026-02-02 01:19:20 | INFO  | Flavor SCS-8V-32-100 created 2026-02-02 01:19:21.340452 | orchestrator | 2026-02-02 01:19:20 | INFO  | Flavor SCS-16V-32 created 2026-02-02 01:19:21.340463 | orchestrator | 2026-02-02 01:19:20 | INFO  | Flavor SCS-16V-32-100 created 2026-02-02 01:19:21.340474 | orchestrator | 2026-02-02 01:19:20 | INFO  | Flavor SCS-2V-4-20s created 2026-02-02 01:19:21.340485 | orchestrator | 2026-02-02 01:19:20 | INFO  | Flavor SCS-4V-8-50s created 2026-02-02 01:19:21.340496 | orchestrator | 2026-02-02 01:19:21 | INFO  | Flavor SCS-8V-32-100s created 2026-02-02 01:19:23.926747 | orchestrator | 2026-02-02 01:19:23 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-02 01:19:34.050087 | orchestrator | 2026-02-02 01:19:34 | INFO  | Prepare task for execution of bootstrap-basic. 2026-02-02 01:19:34.135693 | orchestrator | 2026-02-02 01:19:34 | INFO  | Task 98195a6d-73bc-438d-8b57-1657629823ef (bootstrap-basic) was prepared for execution. 2026-02-02 01:19:34.135771 | orchestrator | 2026-02-02 01:19:34 | INFO  | It takes a moment until task 98195a6d-73bc-438d-8b57-1657629823ef (bootstrap-basic) has been started and output is visible here. 
2026-02-02 01:20:23.221294 | orchestrator | 2026-02-02 01:20:23.221437 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-02 01:20:23.221462 | orchestrator | 2026-02-02 01:20:23.221482 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 01:20:23.221501 | orchestrator | Monday 02 February 2026 01:19:39 +0000 (0:00:00.088) 0:00:00.088 ******* 2026-02-02 01:20:23.221519 | orchestrator | ok: [localhost] 2026-02-02 01:20:23.221537 | orchestrator | 2026-02-02 01:20:23.221556 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-02 01:20:23.221577 | orchestrator | Monday 02 February 2026 01:19:41 +0000 (0:00:02.083) 0:00:02.171 ******* 2026-02-02 01:20:23.221594 | orchestrator | ok: [localhost] 2026-02-02 01:20:23.221612 | orchestrator | 2026-02-02 01:20:23.221629 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-02 01:20:23.221647 | orchestrator | Monday 02 February 2026 01:19:50 +0000 (0:00:08.947) 0:00:11.119 ******* 2026-02-02 01:20:23.221666 | orchestrator | changed: [localhost] 2026-02-02 01:20:23.221685 | orchestrator | 2026-02-02 01:20:23.221705 | orchestrator | TASK [Create public network] *************************************************** 2026-02-02 01:20:23.221723 | orchestrator | Monday 02 February 2026 01:19:57 +0000 (0:00:07.842) 0:00:18.962 ******* 2026-02-02 01:20:23.221741 | orchestrator | changed: [localhost] 2026-02-02 01:20:23.221760 | orchestrator | 2026-02-02 01:20:23.221780 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-02 01:20:23.221805 | orchestrator | Monday 02 February 2026 01:20:03 +0000 (0:00:05.989) 0:00:24.952 ******* 2026-02-02 01:20:23.221824 | orchestrator | changed: [localhost] 2026-02-02 01:20:23.221842 | orchestrator | 2026-02-02 01:20:23.221862 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-02 01:20:23.221882 | orchestrator | Monday 02 February 2026 01:20:10 +0000 (0:00:06.708) 0:00:31.661 ******* 2026-02-02 01:20:23.221901 | orchestrator | changed: [localhost] 2026-02-02 01:20:23.221920 | orchestrator | 2026-02-02 01:20:23.221941 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-02 01:20:23.221960 | orchestrator | Monday 02 February 2026 01:20:15 +0000 (0:00:04.388) 0:00:36.049 ******* 2026-02-02 01:20:23.221980 | orchestrator | changed: [localhost] 2026-02-02 01:20:23.221994 | orchestrator | 2026-02-02 01:20:23.222079 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-02 01:20:23.222096 | orchestrator | Monday 02 February 2026 01:20:19 +0000 (0:00:04.083) 0:00:40.133 ******* 2026-02-02 01:20:23.222144 | orchestrator | ok: [localhost] 2026-02-02 01:20:23.222156 | orchestrator | 2026-02-02 01:20:23.222167 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 01:20:23.222178 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 01:20:23.222190 | orchestrator | 2026-02-02 01:20:23.222221 | orchestrator | 2026-02-02 01:20:23.222254 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 01:20:23.222273 | orchestrator | Monday 02 February 2026 01:20:22 +0000 (0:00:03.796) 0:00:43.930 ******* 2026-02-02 01:20:23.222291 | orchestrator | =============================================================================== 2026-02-02 01:20:23.222309 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.95s 2026-02-02 01:20:23.222327 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.84s 2026-02-02 01:20:23.222345 | 
orchestrator | Set public network to default ------------------------------------------- 6.71s 2026-02-02 01:20:23.222397 | orchestrator | Create public network --------------------------------------------------- 5.99s 2026-02-02 01:20:23.222416 | orchestrator | Create public subnet ---------------------------------------------------- 4.39s 2026-02-02 01:20:23.222432 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.08s 2026-02-02 01:20:23.222447 | orchestrator | Create manager role ----------------------------------------------------- 3.80s 2026-02-02 01:20:23.222462 | orchestrator | Gathering Facts --------------------------------------------------------- 2.08s 2026-02-02 01:20:25.902280 | orchestrator | 2026-02-02 01:20:25 | INFO  | It takes a moment until task 82a588f0-0fde-4449-b6d1-4598f1810087 (image-manager) has been started and output is visible here. 2026-02-02 01:21:04.646419 | orchestrator | 2026-02-02 01:20:28 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-02 01:21:04.646532 | orchestrator | 2026-02-02 01:20:29 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-02 01:21:04.646551 | orchestrator | 2026-02-02 01:20:29 | INFO  | Importing image Cirros 0.6.2 2026-02-02 01:21:04.646563 | orchestrator | 2026-02-02 01:20:29 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-02 01:21:04.646576 | orchestrator | 2026-02-02 01:20:31 | INFO  | Waiting for image to leave queued state... 2026-02-02 01:21:04.646588 | orchestrator | 2026-02-02 01:20:33 | INFO  | Waiting for import to complete... 
2026-02-02 01:21:04.646601 | orchestrator | 2026-02-02 01:20:44 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-02-02 01:21:04.646612 | orchestrator | 2026-02-02 01:20:44 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-02-02 01:21:04.646623 | orchestrator | 2026-02-02 01:20:44 | INFO  | Setting internal_version = 0.6.2
2026-02-02 01:21:04.646634 | orchestrator | 2026-02-02 01:20:44 | INFO  | Setting image_original_user = cirros
2026-02-02 01:21:04.646646 | orchestrator | 2026-02-02 01:20:44 | INFO  | Adding tag os:cirros
2026-02-02 01:21:04.646657 | orchestrator | 2026-02-02 01:20:44 | INFO  | Setting property architecture: x86_64
2026-02-02 01:21:04.646668 | orchestrator | 2026-02-02 01:20:44 | INFO  | Setting property hw_disk_bus: scsi
2026-02-02 01:21:04.646678 | orchestrator | 2026-02-02 01:20:45 | INFO  | Setting property hw_rng_model: virtio
2026-02-02 01:21:04.646689 | orchestrator | 2026-02-02 01:20:45 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-02 01:21:04.646701 | orchestrator | 2026-02-02 01:20:45 | INFO  | Setting property hw_watchdog_action: reset
2026-02-02 01:21:04.646712 | orchestrator | 2026-02-02 01:20:45 | INFO  | Setting property hypervisor_type: qemu
2026-02-02 01:21:04.646723 | orchestrator | 2026-02-02 01:20:45 | INFO  | Setting property os_distro: cirros
2026-02-02 01:21:04.646734 | orchestrator | 2026-02-02 01:20:46 | INFO  | Setting property os_purpose: minimal
2026-02-02 01:21:04.646745 | orchestrator | 2026-02-02 01:20:46 | INFO  | Setting property replace_frequency: never
2026-02-02 01:21:04.646756 | orchestrator | 2026-02-02 01:20:46 | INFO  | Setting property uuid_validity: none
2026-02-02 01:21:04.646766 | orchestrator | 2026-02-02 01:20:46 | INFO  | Setting property provided_until: none
2026-02-02 01:21:04.646777 | orchestrator | 2026-02-02 01:20:46 | INFO  | Setting property image_description: Cirros
2026-02-02 01:21:04.646788 | orchestrator | 2026-02-02 01:20:47 | INFO  | Setting property image_name: Cirros
2026-02-02 01:21:04.646799 | orchestrator | 2026-02-02 01:20:47 | INFO  | Setting property internal_version: 0.6.2
2026-02-02 01:21:04.646836 | orchestrator | 2026-02-02 01:20:47 | INFO  | Setting property image_original_user: cirros
2026-02-02 01:21:04.646848 | orchestrator | 2026-02-02 01:20:47 | INFO  | Setting property os_version: 0.6.2
2026-02-02 01:21:04.646868 | orchestrator | 2026-02-02 01:20:47 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-02 01:21:04.646881 | orchestrator | 2026-02-02 01:20:47 | INFO  | Setting property image_build_date: 2023-05-30
2026-02-02 01:21:04.646894 | orchestrator | 2026-02-02 01:20:48 | INFO  | Checking status of 'Cirros 0.6.2'
2026-02-02 01:21:04.646907 | orchestrator | 2026-02-02 01:20:48 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-02-02 01:21:04.646920 | orchestrator | 2026-02-02 01:20:48 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-02-02 01:21:04.646937 | orchestrator | 2026-02-02 01:20:48 | INFO  | Processing image 'Cirros 0.6.3'
2026-02-02 01:21:04.646951 | orchestrator | 2026-02-02 01:20:48 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-02-02 01:21:04.646963 | orchestrator | 2026-02-02 01:20:48 | INFO  | Importing image Cirros 0.6.3
2026-02-02 01:21:04.646976 | orchestrator | 2026-02-02 01:20:48 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-02 01:21:04.646996 | orchestrator | 2026-02-02 01:20:50 | INFO  | Waiting for import to complete...
2026-02-02 01:21:04.647020 | orchestrator | 2026-02-02 01:21:00 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-02-02 01:21:04.647071 | orchestrator | 2026-02-02 01:21:00 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-02-02 01:21:04.647091 | orchestrator | 2026-02-02 01:21:00 | INFO  | Setting internal_version = 0.6.3
2026-02-02 01:21:04.647110 | orchestrator | 2026-02-02 01:21:00 | INFO  | Setting image_original_user = cirros
2026-02-02 01:21:04.647128 | orchestrator | 2026-02-02 01:21:00 | INFO  | Adding tag os:cirros
2026-02-02 01:21:04.647147 | orchestrator | 2026-02-02 01:21:00 | INFO  | Setting property architecture: x86_64
2026-02-02 01:21:04.647167 | orchestrator | 2026-02-02 01:21:00 | INFO  | Setting property hw_disk_bus: scsi
2026-02-02 01:21:04.647186 | orchestrator | 2026-02-02 01:21:01 | INFO  | Setting property hw_rng_model: virtio
2026-02-02 01:21:04.647206 | orchestrator | 2026-02-02 01:21:01 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-02 01:21:04.647221 | orchestrator | 2026-02-02 01:21:01 | INFO  | Setting property hw_watchdog_action: reset
2026-02-02 01:21:04.647234 | orchestrator | 2026-02-02 01:21:01 | INFO  | Setting property hypervisor_type: qemu
2026-02-02 01:21:04.647278 | orchestrator | 2026-02-02 01:21:01 | INFO  | Setting property os_distro: cirros
2026-02-02 01:21:04.647292 | orchestrator | 2026-02-02 01:21:01 | INFO  | Setting property os_purpose: minimal
2026-02-02 01:21:04.647303 | orchestrator | 2026-02-02 01:21:02 | INFO  | Setting property replace_frequency: never
2026-02-02 01:21:04.647314 | orchestrator | 2026-02-02 01:21:02 | INFO  | Setting property uuid_validity: none
2026-02-02 01:21:04.647326 | orchestrator | 2026-02-02 01:21:02 | INFO  | Setting property provided_until: none
2026-02-02 01:21:04.647336 | orchestrator | 2026-02-02 01:21:02 | INFO  | Setting property image_description: Cirros
2026-02-02 01:21:04.647347 | orchestrator | 2026-02-02 01:21:02 | INFO  | Setting property image_name: Cirros
2026-02-02 01:21:04.647358 | orchestrator | 2026-02-02 01:21:03 | INFO  | Setting property internal_version: 0.6.3
2026-02-02 01:21:04.647381 | orchestrator | 2026-02-02 01:21:03 | INFO  | Setting property image_original_user: cirros
2026-02-02 01:21:04.647392 | orchestrator | 2026-02-02 01:21:03 | INFO  | Setting property os_version: 0.6.3
2026-02-02 01:21:04.647402 | orchestrator | 2026-02-02 01:21:03 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-02 01:21:04.647413 | orchestrator | 2026-02-02 01:21:03 | INFO  | Setting property image_build_date: 2024-09-26
2026-02-02 01:21:04.647424 | orchestrator | 2026-02-02 01:21:04 | INFO  | Checking status of 'Cirros 0.6.3'
2026-02-02 01:21:04.647435 | orchestrator | 2026-02-02 01:21:04 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-02-02 01:21:04.647446 | orchestrator | 2026-02-02 01:21:04 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-02-02 01:21:05.040920 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-02-02 01:21:07.464963 | orchestrator | 2026-02-02 01:21:07 | INFO  | date: 2026-01-30
2026-02-02 01:21:07.465081 | orchestrator | 2026-02-02 01:21:07 | INFO  | image: octavia-amphora-haproxy-2025.1.20260130.qcow2
2026-02-02 01:21:07.465122 | orchestrator | 2026-02-02 01:21:07 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260130.qcow2
2026-02-02 01:21:07.465865 | orchestrator | 2026-02-02 01:21:07 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260130.qcow2.CHECKSUM
2026-02-02 01:21:07.586245 | orchestrator | 2026-02-02 01:21:07 | INFO  | checksum: 60117284323820c1e4b4444ed885acebaec4e5ce9770f1f4ff254866b0480153
2026-02-02 01:21:07.665353 | orchestrator | 2026-02-02 01:21:07 | INFO  | It takes a moment until task 64b712e2-601c-41c4-a954-f30f02af4290 (image-manager) has been started and output is visible here.
2026-02-02 01:23:54.069560 | orchestrator | 2026-02-02 01:21:09 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-30'
2026-02-02 01:23:54.069663 | orchestrator | 2026-02-02 01:21:09 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260130.qcow2: 200
2026-02-02 01:23:54.069679 | orchestrator | 2026-02-02 01:21:09 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-30
2026-02-02 01:23:54.069689 | orchestrator | 2026-02-02 01:21:09 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260130.qcow2
2026-02-02 01:23:54.069699 | orchestrator | 2026-02-02 01:21:11 | INFO  | Waiting for image to leave queued state...
2026-02-02 01:23:54.069708 | orchestrator | 2026-02-02 01:21:13 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069719 | orchestrator | 2026-02-02 01:21:23 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069728 | orchestrator | 2026-02-02 01:21:33 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069738 | orchestrator | 2026-02-02 01:21:43 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069749 | orchestrator | 2026-02-02 01:21:54 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069760 | orchestrator | 2026-02-02 01:22:04 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069768 | orchestrator | 2026-02-02 01:22:14 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069825 | orchestrator | 2026-02-02 01:22:24 | INFO  | Waiting for image to leave queued state...
2026-02-02 01:23:54.069834 | orchestrator | 2026-02-02 01:22:26 | INFO  | Waiting for image to leave queued state...
2026-02-02 01:23:54.069862 | orchestrator | 2026-02-02 01:22:28 | INFO  | Waiting for image to leave queued state...
2026-02-02 01:23:54.069872 | orchestrator | 2026-02-02 01:22:30 | INFO  | Waiting for image to leave queued state...
2026-02-02 01:23:54.069881 | orchestrator | 2026-02-02 01:22:32 | ERROR  | Image OpenStack Octavia Amphora 2026-01-30 seems stuck in queued state
2026-02-02 01:23:54.069892 | orchestrator | 2026-02-02 01:22:32 | WARNING  | Deleting stuck image OpenStack Octavia Amphora 2026-01-30 and retrying import
2026-02-02 01:23:54.069901 | orchestrator | 2026-02-02 01:22:32 | INFO  | Retry attempt 1/1 for image OpenStack Octavia Amphora 2026-01-30
2026-02-02 01:23:54.069910 | orchestrator | 2026-02-02 01:22:33 | INFO  | Waiting for image to leave queued state...
2026-02-02 01:23:54.069919 | orchestrator | 2026-02-02 01:22:35 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069929 | orchestrator | 2026-02-02 01:22:45 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069938 | orchestrator | 2026-02-02 01:22:55 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069948 | orchestrator | 2026-02-02 01:23:05 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069957 | orchestrator | 2026-02-02 01:23:15 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069966 | orchestrator | 2026-02-02 01:23:25 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069974 | orchestrator | 2026-02-02 01:23:35 | INFO  | Waiting for import to complete...
2026-02-02 01:23:54.069983 | orchestrator | 2026-02-02 01:23:45 | INFO  | Waiting for image to leave queued state...
2026-02-02 01:23:54.069999 | orchestrator | 2026-02-02 01:23:47 | INFO  | Waiting for image to leave queued state...
2026-02-02 01:23:54.070009 | orchestrator | 2026-02-02 01:23:49 | INFO  | Waiting for image to leave queued state...
2026-02-02 01:23:54.070083 | orchestrator | 2026-02-02 01:23:51 | INFO  | Waiting for image to leave queued state...
2026-02-02 01:23:54.070093 | orchestrator | 2026-02-02 01:23:53 | ERROR  | Image OpenStack Octavia Amphora 2026-01-30 seems stuck in queued state
2026-02-02 01:23:54.070103 | orchestrator | 2026-02-02 01:23:53 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-02-02 01:23:54.070113 | orchestrator | 2026-02-02 01:23:53 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-02-02 01:23:54.070124 | orchestrator | 2026-02-02 01:23:53 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-02-02 01:23:54.070134 | orchestrator | 2026-02-02 01:23:53 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-02-02 01:23:54.070144 | orchestrator |
2026-02-02 01:23:54.070158 | orchestrator | ERROR: One or more errors occurred during the execution of the program, please check the output.
2026-02-02 01:23:55.042410 | orchestrator | ERROR
2026-02-02 01:23:55.042967 | orchestrator | {
2026-02-02 01:23:55.043084 | orchestrator | "delta": "0:04:48.583796",
2026-02-02 01:23:55.043155 | orchestrator | "end": "2026-02-02 01:23:54.686508",
2026-02-02 01:23:55.043216 | orchestrator | "msg": "non-zero return code",
2026-02-02 01:23:55.043271 | orchestrator | "rc": 1,
2026-02-02 01:23:55.043324 | orchestrator | "start": "2026-02-02 01:19:06.102712"
2026-02-02 01:23:55.043376 | orchestrator | } failure
2026-02-02 01:23:55.060108 |
2026-02-02 01:23:55.060226 | PLAY RECAP
2026-02-02 01:23:55.060298 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2026-02-02 01:23:55.060334 |
2026-02-02 01:23:55.268972 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-02-02 01:23:55.270137 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-02 01:23:56.038359 |
2026-02-02 01:23:56.038544 | PLAY [Post output play]
2026-02-02 01:23:56.054636 |
2026-02-02 01:23:56.054775 | LOOP [stage-output : Register sources]
2026-02-02 01:23:56.123157 |
2026-02-02 01:23:56.123460 | TASK [stage-output : Check sudo]
2026-02-02 01:23:56.983648 | orchestrator | sudo: a password is required
2026-02-02 01:23:57.159068 | orchestrator | ok: Runtime: 0:00:00.022286
2026-02-02 01:23:57.169575 |
2026-02-02 01:23:57.169705 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-02 01:23:57.203861 |
2026-02-02 01:23:57.204071 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-02 01:23:57.268342 | orchestrator | ok
2026-02-02 01:23:57.275822 |
2026-02-02 01:23:57.275945 | LOOP [stage-output : Ensure target folders exist]
2026-02-02 01:23:57.753002 | orchestrator | ok: "docs"
2026-02-02 01:23:57.753374 |
2026-02-02 01:23:58.010941 | orchestrator | ok: "artifacts"
2026-02-02 01:23:58.264191 | orchestrator | ok: "logs"
2026-02-02 01:23:58.284412 |
2026-02-02 01:23:58.284603 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-02 01:23:58.325619 |
2026-02-02 01:23:58.325916 | TASK [stage-output : Make all log files readable]
2026-02-02 01:23:58.652508 | orchestrator | ok
2026-02-02 01:23:58.666187 |
2026-02-02 01:23:58.666403 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-02 01:23:58.702650 | orchestrator | skipping: Conditional result was False
2026-02-02 01:23:58.718304 |
2026-02-02 01:23:58.718472 | TASK [stage-output : Discover log files for compression]
2026-02-02 01:23:58.744929 | orchestrator | skipping: Conditional result was False
2026-02-02 01:23:58.763761 |
2026-02-02 01:23:58.763945 | LOOP [stage-output : Archive everything from logs]
2026-02-02 01:23:58.812016 |
2026-02-02 01:23:58.812220 | PLAY [Post cleanup play]
2026-02-02 01:23:58.821099 |
2026-02-02 01:23:58.821209 | TASK [Set cloud fact (Zuul deployment)]
2026-02-02 01:23:58.879608 | orchestrator | ok
2026-02-02 01:23:58.890261 |
2026-02-02 01:23:58.890381 | TASK [Set cloud fact (local deployment)]
2026-02-02 01:23:58.924394 | orchestrator | skipping: Conditional result was False
2026-02-02 01:23:58.935904 |
2026-02-02 01:23:58.936140 | TASK [Clean the cloud environment]
2026-02-02 01:24:01.433012 | orchestrator | 2026-02-02 01:24:01 - clean up servers
2026-02-02 01:24:02.196862 | orchestrator | 2026-02-02 01:24:02 - testbed-manager
2026-02-02 01:24:02.286067 | orchestrator | 2026-02-02 01:24:02 - testbed-node-1
2026-02-02 01:24:02.371494 | orchestrator | 2026-02-02 01:24:02 - testbed-node-2
2026-02-02 01:24:02.460049 | orchestrator | 2026-02-02 01:24:02 - testbed-node-0
2026-02-02 01:24:02.548615 | orchestrator | 2026-02-02 01:24:02 - testbed-node-3
2026-02-02 01:24:02.640227 | orchestrator | 2026-02-02 01:24:02 - testbed-node-4
2026-02-02 01:24:02.732343 | orchestrator | 2026-02-02 01:24:02 - testbed-node-5
2026-02-02 01:24:02.820906 | orchestrator | 2026-02-02 01:24:02 - clean up keypairs
2026-02-02 01:24:02.839275 | orchestrator | 2026-02-02 01:24:02 - testbed
2026-02-02 01:24:02.863257 | orchestrator | 2026-02-02 01:24:02 - wait for servers to be gone
2026-02-02 01:24:11.592038 | orchestrator | 2026-02-02 01:24:11 - clean up ports
2026-02-02 01:24:11.796435 | orchestrator | 2026-02-02 01:24:11 - 109fcbae-e068-4ec0-9a57-0bcd669b6061
2026-02-02 01:24:12.074588 | orchestrator | 2026-02-02 01:24:12 - 174f4ef9-675c-42b9-8c40-80fdba783ff1
2026-02-02 01:24:12.344802 | orchestrator | 2026-02-02 01:24:12 - 45b2d860-437a-4ea4-bc2c-93a4fd2d6e87
2026-02-02 01:24:12.724257 | orchestrator | 2026-02-02 01:24:12 - 85cecd28-bad0-4700-b2ba-bd56756df6fa
2026-02-02 01:24:12.948492 | orchestrator | 2026-02-02 01:24:12 - da8b09fb-e0e9-476d-9a52-518019a88216
2026-02-02 01:24:13.164389 | orchestrator | 2026-02-02 01:24:13 - ed72477b-afc9-4737-88f3-f7a309d5e51b
2026-02-02 01:24:13.384913 | orchestrator | 2026-02-02 01:24:13 - feb2fc7f-d388-4b39-a34d-62df955058d2
2026-02-02 01:24:13.627410 | orchestrator | 2026-02-02 01:24:13 - clean up volumes
2026-02-02 01:24:13.750003 | orchestrator | 2026-02-02 01:24:13 - testbed-volume-2-node-base
2026-02-02 01:24:13.787096 | orchestrator | 2026-02-02 01:24:13 - testbed-volume-3-node-base
2026-02-02 01:24:13.831367 | orchestrator | 2026-02-02 01:24:13 - testbed-volume-0-node-base
2026-02-02 01:24:13.873897 | orchestrator | 2026-02-02 01:24:13 - testbed-volume-4-node-base
2026-02-02 01:24:13.915109 | orchestrator | 2026-02-02 01:24:13 - testbed-volume-manager-base
2026-02-02 01:24:13.957037 | orchestrator | 2026-02-02 01:24:13 - testbed-volume-1-node-base
2026-02-02 01:24:14.000661 | orchestrator | 2026-02-02 01:24:14 - testbed-volume-5-node-base
2026-02-02 01:24:14.043041 | orchestrator | 2026-02-02 01:24:14 - testbed-volume-2-node-5
2026-02-02 01:24:14.084053 | orchestrator | 2026-02-02 01:24:14 - testbed-volume-0-node-3
2026-02-02 01:24:14.126339 | orchestrator | 2026-02-02 01:24:14 - testbed-volume-7-node-4
2026-02-02 01:24:14.169449 | orchestrator | 2026-02-02 01:24:14 - testbed-volume-4-node-4
2026-02-02 01:24:14.215103 | orchestrator | 2026-02-02 01:24:14 - testbed-volume-3-node-3
2026-02-02 01:24:14.257377 | orchestrator | 2026-02-02 01:24:14 - testbed-volume-1-node-4
2026-02-02 01:24:14.299512 | orchestrator | 2026-02-02 01:24:14 - testbed-volume-8-node-5
2026-02-02 01:24:14.348039 | orchestrator | 2026-02-02 01:24:14 - testbed-volume-5-node-5
2026-02-02 01:24:14.391772 | orchestrator | 2026-02-02 01:24:14 - testbed-volume-6-node-3
2026-02-02 01:24:14.433487 | orchestrator | 2026-02-02 01:24:14 - disconnect routers
2026-02-02 01:24:15.005120 | orchestrator | 2026-02-02 01:24:15 - testbed
2026-02-02 01:24:15.979781 | orchestrator | 2026-02-02 01:24:15 - clean up subnets
2026-02-02 01:24:16.032917 | orchestrator | 2026-02-02 01:24:16 - subnet-testbed-management
2026-02-02 01:24:16.191093 | orchestrator | 2026-02-02 01:24:16 - clean up networks
2026-02-02 01:24:16.359651 | orchestrator | 2026-02-02 01:24:16 - net-testbed-management
2026-02-02 01:24:16.667023 | orchestrator | 2026-02-02 01:24:16 - clean up security groups
2026-02-02 01:24:16.708355 | orchestrator | 2026-02-02 01:24:16 - testbed-management
2026-02-02 01:24:16.840774 | orchestrator | 2026-02-02 01:24:16 - testbed-node
2026-02-02 01:24:16.945169 | orchestrator | 2026-02-02 01:24:16 - clean up floating ips
2026-02-02 01:24:16.984533 | orchestrator | 2026-02-02 01:24:16 - 81.163.193.61
2026-02-02 01:24:17.342248 | orchestrator | 2026-02-02 01:24:17 - clean up routers
2026-02-02 01:24:17.454264 | orchestrator | 2026-02-02 01:24:17 - testbed
2026-02-02 01:24:18.998789 | orchestrator | ok: Runtime: 0:00:19.591418
2026-02-02 01:24:19.003190 |
2026-02-02 01:24:19.003358 | PLAY RECAP
2026-02-02 01:24:19.003487 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-02 01:24:19.003576 |
2026-02-02 01:24:19.140922 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-02 01:24:19.143563 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-02 01:24:19.874768 |
2026-02-02 01:24:19.874945 | PLAY [Cleanup play]
2026-02-02 01:24:19.890791 |
2026-02-02 01:24:19.890955 | TASK [Set cloud fact (Zuul deployment)]
2026-02-02 01:24:19.946425 | orchestrator | ok
2026-02-02 01:24:19.955666 |
2026-02-02 01:24:19.955827 | TASK [Set cloud fact (local deployment)]
2026-02-02 01:24:19.990223 | orchestrator | skipping: Conditional result was False
2026-02-02 01:24:20.005347 |
2026-02-02 01:24:20.005494 | TASK [Clean the cloud environment]
2026-02-02 01:24:21.165241 | orchestrator | 2026-02-02 01:24:21 - clean up servers
2026-02-02 01:24:21.651315 | orchestrator | 2026-02-02 01:24:21 - clean up keypairs
2026-02-02 01:24:21.670157 | orchestrator | 2026-02-02 01:24:21 - wait for servers to be gone
2026-02-02 01:24:21.718625 | orchestrator | 2026-02-02 01:24:21 - clean up ports
2026-02-02 01:24:21.796447 | orchestrator | 2026-02-02 01:24:21 - clean up volumes
2026-02-02 01:24:21.882815 | orchestrator | 2026-02-02 01:24:21 - disconnect routers
2026-02-02 01:24:21.925451 | orchestrator | 2026-02-02 01:24:21 - clean up subnets
2026-02-02 01:24:21.948316 | orchestrator | 2026-02-02 01:24:21 - clean up networks
2026-02-02 01:24:22.101200 | orchestrator | 2026-02-02 01:24:22 - clean up security groups
2026-02-02 01:24:22.138826 | orchestrator | 2026-02-02 01:24:22 - clean up floating ips
2026-02-02 01:24:22.165101 | orchestrator | 2026-02-02 01:24:22 - clean up routers
2026-02-02 01:24:22.544431 | orchestrator | ok: Runtime: 0:00:01.431801
2026-02-02 01:24:22.548701 |
2026-02-02 01:24:22.548878 | PLAY RECAP
2026-02-02 01:24:22.549023 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-02 01:24:22.549097 |
2026-02-02 01:24:22.684233 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-02 01:24:22.685303 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-02 01:24:23.424990 |
2026-02-02 01:24:23.425142 | PLAY [Base post-fetch]
2026-02-02 01:24:23.440249 |
2026-02-02 01:24:23.440379 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-02 01:24:23.497112 | orchestrator | skipping: Conditional result was False
2026-02-02 01:24:23.503844 |
2026-02-02 01:24:23.503979 | TASK [fetch-output : Set log path for single node]
2026-02-02 01:24:23.553293 | orchestrator | ok
2026-02-02 01:24:23.559195 |
2026-02-02 01:24:23.559301 | LOOP [fetch-output : Ensure local output dirs]
2026-02-02 01:24:24.046581 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/b4e47cf19b7542679b401536a50ab8f8/work/logs"
2026-02-02 01:24:24.344150 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b4e47cf19b7542679b401536a50ab8f8/work/artifacts"
2026-02-02 01:24:24.619651 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b4e47cf19b7542679b401536a50ab8f8/work/docs"
2026-02-02 01:24:24.634160 |
2026-02-02 01:24:24.634283 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-02 01:24:25.566405 | orchestrator | changed: .d..t...... ./
2026-02-02 01:24:25.566673 | orchestrator | changed: All items complete
2026-02-02 01:24:25.566713 |
2026-02-02 01:24:26.313277 | orchestrator | changed: .d..t...... ./
2026-02-02 01:24:27.035486 | orchestrator | changed: .d..t...... ./
2026-02-02 01:24:27.057659 |
2026-02-02 01:24:27.057797 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-02 01:24:27.083735 | orchestrator | skipping: Conditional result was False
2026-02-02 01:24:27.086646 | orchestrator | skipping: Conditional result was False
2026-02-02 01:24:27.108205 |
2026-02-02 01:24:27.108314 | PLAY RECAP
2026-02-02 01:24:27.108384 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-02 01:24:27.108421 |
2026-02-02 01:24:27.229123 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-02 01:24:27.230209 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-02 01:24:27.954930 |
2026-02-02 01:24:27.955085 | PLAY [Base post]
2026-02-02 01:24:27.969557 |
2026-02-02 01:24:27.969687 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-02 01:24:29.593643 | orchestrator | changed
2026-02-02 01:24:29.602714 |
2026-02-02 01:24:29.602876 | PLAY RECAP
2026-02-02 01:24:29.602952 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-02 01:24:29.603021 |
2026-02-02 01:24:29.725259 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-02 01:24:29.727991 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-02 01:24:30.511312 |
2026-02-02 01:24:30.511482 | PLAY [Base post-logs]
2026-02-02 01:24:30.521971 |
2026-02-02 01:24:30.522110 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-02 01:24:30.986437 | localhost | changed
2026-02-02 01:24:31.002712 |
2026-02-02 01:24:31.002935 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-02 01:24:31.031651 | localhost | ok
2026-02-02 01:24:31.038611 |
2026-02-02 01:24:31.038787 | TASK [Set zuul-log-path fact]
2026-02-02 01:24:31.056293 | localhost | ok
2026-02-02 01:24:31.068257 |
2026-02-02 01:24:31.068387 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-02 01:24:31.095228 | localhost | ok
2026-02-02 01:24:31.101411 |
2026-02-02 01:24:31.101627 | TASK [upload-logs : Create log directories]
2026-02-02 01:24:31.622160 | localhost | changed
2026-02-02 01:24:31.627091 |
2026-02-02 01:24:31.627262 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-02 01:24:32.121261 | localhost -> localhost | ok: Runtime: 0:00:00.007617
2026-02-02 01:24:32.130792 |
2026-02-02 01:24:32.131071 | TASK [upload-logs : Upload logs to log server]
2026-02-02 01:24:32.688215 | localhost | Output suppressed because no_log was given
2026-02-02 01:24:32.690908 |
2026-02-02 01:24:32.691058 | LOOP [upload-logs : Compress console log and json output]
2026-02-02 01:24:32.744214 | localhost | skipping: Conditional result was False
2026-02-02 01:24:32.749625 | localhost | skipping: Conditional result was False
2026-02-02 01:24:32.758926 |
2026-02-02 01:24:32.759040 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-02 01:24:32.813249 | localhost | skipping: Conditional result was False
2026-02-02 01:24:32.813965 |
2026-02-02 01:24:32.818237 | localhost | skipping: Conditional result was False
2026-02-02 01:24:32.828437 |
2026-02-02 01:24:32.828677 | LOOP [upload-logs : Upload console log and json output]